Wasserstein Training of Restricted Boltzmann Machines

Grégoire Montavon
Technische Universität Berlin
Klaus-Robert Müller*
Technische Universität Berlin
[email protected]
[email protected]
Marco Cuturi
CREST, ENSAE, Université Paris-Saclay
[email protected]
Abstract
Boltzmann machines are able to learn highly complex, multimodal, structured
and multiscale real-world data distributions. Parameters of the model are usually
learned by minimizing the Kullback-Leibler (KL) divergence from training samples
to the learned model. We propose in this work a novel approach for Boltzmann
machine training which assumes that a meaningful metric between observations is
known. This metric between observations can then be used to define the Wasserstein
distance between the distribution induced by the Boltzmann machine on the one
hand, and that given by the training sample on the other hand. We derive a
gradient of that distance with respect to the model parameters. Minimization of this
new objective leads to generative models with different statistical properties. We
demonstrate their practical potential on data completion and denoising, for which
the metric between observations plays a crucial role.
1 Introduction
Boltzmann machines [1] are powerful generative models that can be used to approximate a large
class of real-world data distributions, such as handwritten characters [9], speech segments [7], or
multimodal data [16]. Boltzmann machines share similarities with neural networks in their capability
to extract features at multiple scales, and to build well-generalizing hierarchical data representations
[15, 13]. The restricted Boltzmann machine (RBM) is a special type of Boltzmann machine composed
of one layer of latent variables, and defining a probability distribution $p_\theta(x)$ over a set of $d$ binary observed variables whose state is represented by the binary vector $x \in \{0,1\}^d$, and with a parameter vector $\theta$ to be learned.
Given an empirical probability distribution $\hat{p}(x) = \frac{1}{N}\sum_{n=1}^{N} \delta_{x_n}$, where $(x_n)_n$ is a list of $N$ observations in $\{0,1\}^d$, an RBM can be trained using information-theoretic divergences (see for example [12]) by minimizing with respect to $\theta$ a divergence $\Delta(\hat{p}, p_\theta)$ between the sample empirical measure $\hat{p}$ and the modeled distribution $p_\theta$. When $\Delta$ is for instance the KL divergence, this approach results in the well-known Maximum Likelihood Estimator (MLE), which yields gradients for the $\theta$ of the form
$$\nabla_\theta \, \mathrm{KL}(\hat{p} \,\|\, p_\theta) = -\frac{1}{N}\sum_{n=1}^{N} \nabla_\theta \log p_\theta(x_n) = -\big\langle \nabla_\theta \log p_\theta(x) \big\rangle_{\hat{p}}, \tag{1}$$
where the bracket notation $\langle \cdot \rangle_p$ indicates an expectation with respect to $p$. Alternative choices for $\Delta$ are the Bhattacharyya/Hellinger and Euclidean distances between distributions, or more generally $F$-divergences or $M$-estimators [10]. They all result in comparable gradient terms that try to adjust $\theta$ so that the fitting terms $p_\theta(x_n)$ grow as large as possible.

* Also with the Department of Brain and Cognitive Engineering, Korea University.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We explore in this work a different scenario: what if $\Delta$ is chosen so that $p_\theta(x)$ is large, on average, when $x$ is close to a data point $x_n$ in some sense, but not necessarily when $x$ coincides exactly with $x_n$? To adopt such a geometric criterion, we must first define what closeness between observations means. In almost all applications of Boltzmann machines, such a metric between observations is readily available: one can for example consider the Hamming distance between binary vectors, or any other metric motivated by practical considerations². This being done, the geometric criterion we have drawn can be materialized by considering for $\Delta$ the Wasserstein distance [20] (a.k.a. the Kantorovich or the earth mover's distance [14]) between measures. This choice was considered in theory by [2], who proved its statistical consistency, but was never considered practically to the best of our knowledge. This paper describes a practical derivation for a minimum Kantorovich distance estimator [2] for Boltzmann machines, which can scale up to tens of thousands of observations. As will be described in this paper, recent advances in the fast approximation of Wasserstein distances [5] and their derivatives [6] play an important role in the practical implementation of these computations.
Before describing this approach in detail, we would like to stress that measuring goodness-of-fit with the Wasserstein distance results in a considerably different perspective than that provided by a Kullback-Leibler/MLE approach. This difference is illustrated in Figure 1, where a probability $p_\theta$ can be close from a KL perspective to a given empirical measure $\hat{p}$, but far from the same measure $\hat{p}$ in the Wasserstein sense. Conversely, a different probability $p_{\theta'}$ can miss the mark from a KL viewpoint but achieve a low Wasserstein distance to $\hat{p}$. Before proceeding to the rest of this paper, let us mention that Wasserstein distances have a broad appeal for machine learning. That distance was for instance introduced in the context of supervised inference by [8], who used it to compute a predictive loss between the output of a multilabel classifier against its ground truth, or for domain adaptation, by [4].
Figure 1: Empirical distribution $\hat{p}(x)$ (gray) defined on the set of states $\{0,1\}^d$ with $d = 3$, superposed to two possible models of it defined on the same set of states. The size of the circles indicates the probability mass for each state. For each model, we show its KL and Wasserstein divergences from $\hat{p}(x)$, and an explanation of why the divergences are low or high: a large/small overlap with $\hat{p}(x)$, or a large/small distance from $\hat{p}(x)$.
2 Minimum Wasserstein Distance Estimation
Consider two probabilities $p, q$ in $\mathcal{P}(\mathcal{X})$, the set of probabilities on $\mathcal{X} = \{0,1\}^d$. Namely, two maps $p, q \in \mathbb{R}_+^{\mathcal{X}}$ such that $\sum_x p(x) = \sum_x q(x) = 1$, where we omit $x \in \mathcal{X}$ under the summation sign.
Consider a cost function defined on $\mathcal{X} \times \mathcal{X}$, typically a distance $D : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$. Given a constant $\gamma \geq 0$, the $\gamma$-smoothed Wasserstein distance [5] is equal to
$$W_\gamma(p, q) = \min_{\pi \in \Pi(p,q)} \langle D(x, x') \rangle_\pi - \gamma H(\pi), \tag{2}$$
where $\Pi(p,q)$ is the set of joint probabilities $\pi$ on $\mathcal{X} \times \mathcal{X}$ such that $\sum_{x'} \pi(x, x') = p(x)$, $\sum_{x} \pi(x, x') = q(x')$, and $H(\pi) = -\sum_{x x'} \pi(x, x') \log \pi(x, x')$ is the Shannon entropy of $\pi$.
This optimization problem, a strictly convex program, has an equivalent dual formulation [6] which involves instead two real-valued functions $\alpha, \beta \in \mathbb{R}^{\mathcal{X}}$ and which plays an important role in this paper:
$$W_\gamma(p, q) = \max_{\alpha, \beta \in \mathbb{R}^{\mathcal{X}}} \langle \alpha(x) \rangle_p + \langle \beta(x') \rangle_q - \gamma \sum_{x x'} e^{\frac{1}{\gamma}(\alpha(x) + \beta(x') - D(x, x')) - 1}. \tag{3}$$
² When using the MLE principle, metric considerations play a key role to define densities $p_\theta$, e.g. the reliance of Gaussian densities on Euclidean distances. This is the kind of metric we take for granted in this work.
Smooth Wasserstein Distances  The "true" Wasserstein distance corresponds to the case where $\gamma = 0$, that is, when Equation (2) is stripped of the entropic term. One can easily verify that this definition matches the usual linear program used to describe Wasserstein/EMD distances [14]. When $\gamma \to 0$ in Equation (3), one also recovers the Kantorovich dual formulation, because the rightmost regularizer converges to the indicator function of the feasible set of the dual optimal transport problem, $\alpha(x) + \beta(x') \leq D(x, x')$. We consider in this paper the case $\gamma > 0$ because it was shown in [5] to considerably facilitate computations, and in [6] to result in a divergence $W_\gamma(p, q)$ which, unlike the case $\gamma = 0$, is differentiable w.r.t. the first variable. Looking at the dual formulation in Equation (3), one can see that this gradient is equal to $\alpha^\star$, the centered optimal dual variable (the centering step for $\alpha^\star$ ensures the orthogonality with respect to the simplex constraint).
Sensitivity analysis gives a clear interpretation to the quantity $\alpha^\star(x)$: it measures the cost for each unit of mass placed by $p$ at $x$ when computing the Wasserstein distance $W_\gamma(p, q)$. To decrease $W_\gamma(p, q)$, it might thus be favorable to transfer mass in $p$ from points where $\alpha^\star(x)$ is high to place it on points where $\alpha^\star(x)$ is low. This idea can be used, by a simple application of the chain rule, to minimize, given a fixed target probability $p$, the quantity $W_\gamma(p_\theta, p)$ with respect to $\theta$.
Proposition 1. Let $p_\theta(x) = \frac{1}{Z} e^{-F_\theta(x)}$ be a parameterized family of probability distributions where $F_\theta(x)$ is a differentiable function of $\theta \in \Theta$, and write $G_\theta = \langle \nabla_\theta F_\theta(x) \rangle_{p_\theta}$. Let $\alpha^\star$ be the centered optimal dual solution of $W_\gamma(p_\theta, p)$ as described in Equation (3). The gradient of the smoothed Wasserstein distance with respect to $\theta$ is given by
$$\nabla_\theta W_\gamma(p_\theta, p) = \big\langle \alpha^\star(x) \big\rangle_{p_\theta} G_\theta - \big\langle \alpha^\star(x)\, \nabla_\theta F_\theta(x) \big\rangle_{p_\theta}. \tag{4}$$
Proof. This result is a direct application of the chain rule: we have
$$\nabla_\theta W_\gamma(p_\theta, p) = \left(\frac{\partial p_\theta}{\partial \theta}\right)^{\!T} \frac{\partial W_\gamma(p_\theta, q)}{\partial p_\theta}.$$
As mentioned in [6], the rightmost term is the optimal dual variable (the Kantorovich potential), $\partial W_\gamma(p_\theta, q)/\partial p_\theta = \alpha^\star$. The Jacobian $(\partial p_\theta / \partial \theta)$ is a linear map $\Theta \to \mathcal{X}$. For a given $x'$,
$$\partial p_\theta(x')/\partial \theta = p_\theta(x')\, G_\theta - \nabla F_\theta(x')\, p_\theta(x').$$
As a consequence, $\big(\frac{\partial p_\theta}{\partial \theta}\big)^{T} \alpha^\star$ is the integral w.r.t. $x'$ of the term above multiplied by $\alpha^\star(x')$, which results in Equation (4).
Comparison with the KL Fitting Error  The target distribution $p$ plays a direct role in the formation of the gradient of $\mathrm{KL}(\hat{p} \,\|\, p_\theta)$ w.r.t. $\theta$ through the term $\langle \nabla_\theta F_\theta(x) \rangle_{\hat{p}}$ in Equation (1). The Wasserstein gradient incorporates the knowledge of $p$ in a different way, by considering, on the support of $p_\theta$ only, points $x$ that correspond to high potentials (costs) $\alpha^\star(x)$ when computing the distance of $p_\theta$ to $p$. A high potential at $x$ means that the probability $p_\theta(x)$ should be lowered if one were to decrease $W_\gamma(p_\theta, p)$, by varying $\theta$ accordingly.
Sampling Approximation  The gradient in Equation (4) is intractable, since it involves solving an optimal (smoothed) transport problem over probabilities defined on $2^d$ states. In practice, we replace expectations w.r.t. $p_\theta$ by an empirical distribution formed by sampling from the model $p_\theta$ (e.g. the PCD sample [18]). Given a sample $(\tilde{x}_n)_n$ of size $\tilde{N}$ generated by the model, we define $\tilde{p}_\theta = \frac{1}{\tilde{N}}\sum_{n=1}^{\tilde{N}} \delta_{\tilde{x}_n}$. The tilde is used to differentiate the sample generated by the model from the empirical observations. Because the dual potential $\alpha^\star$ is centered and $\tilde{p}_\theta$ is a measure with uniform weights, $\langle \alpha^\star(x) \rangle_{\tilde{p}_\theta} = 0$, which simplifies the approximation of the gradient to
$$\widehat{\nabla}_\theta W_\gamma(p_\theta, \hat{p}) = -\frac{1}{\tilde{N}} \sum_{n=1}^{\tilde{N}} \tilde{\alpha}^\star(\tilde{x}_n)\, \nabla_\theta F_\theta(\tilde{x}_n), \tag{5}$$
where $\tilde{\alpha}^\star$ is the solution of the discrete smooth Wasserstein dual between the two empirical distributions $\hat{p}$ and $\tilde{p}_\theta$, which have respectively supports of size $N$ and $\tilde{N}$. In practical terms, $\tilde{\alpha}^\star$ is a vector of size $\tilde{N}$, one coefficient for each PCD sample, which can be computed by following the algorithm below [6]. To keep notations simple, we describe it in terms of generic probabilities $p$ and $q$, having in mind these are in practice the training and simulated empirical measures $\hat{p}$ and $\tilde{p}_\theta$.
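For illustration, the estimator in Equation (5) is a weighted average of free-energy gradients over the model sample. The following sketch uses our own naming (`grad_free_energy` is a placeholder for the model-specific gradient) and assumes the centered dual vector has already been computed, e.g. with the Sinkhorn iterations described next:

```python
import numpy as np

def wasserstein_gradient(alpha_star, grad_free_energy, samples):
    """Monte-Carlo estimate of Eq. (5).

    alpha_star: (N_tilde,) centered dual potentials, one per model sample.
    grad_free_energy: callable x -> dF_theta/dtheta (parameter-shaped array).
    samples: (N_tilde, d) binary samples drawn from the model (e.g. PCD chains).
    """
    grads = np.stack([grad_free_energy(x) for x in samples])
    # Weight each free-energy gradient by the (negative) dual potential.
    return -np.tensordot(alpha_star, grads, axes=1) / len(samples)
```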
Computing $\alpha^\star$  When $\gamma > 0$, the optimal variable $\alpha^\star$ corresponding to $W_\gamma(p, q)$ can be recovered through the Sinkhorn algorithm with a cost which grows as the product $|p||q|$ of the sizes of the support of $p$ and $q$, where $|p| = \sum_x 1_{p(x) > 0}$. The algorithm is well known but we adapt it here to our setting; see [6, Alg. 3] for a more precise description. To ease notations, we consider an arbitrary ordering of $\mathcal{X}$, a set of cardinality $2^d$, and identify its elements with indices $1 \leq i \leq 2^d$. Let $I = (i_1, \cdots, i_{|p|})$ be the ordered family of indices in the set $\{i \mid p(i) > 0\}$ and define $J$ accordingly for $q$; $I$ and $J$ have respective lengths $|p|$ and $|q|$. Form the matrix $K = [e^{-D(i,j)/\gamma}]_{i \in I, j \in J}$ of size $|p| \times |q|$. Choose now two positive vectors $u \in \mathbb{R}_{++}^{|p|}$ and $v \in \mathbb{R}_{++}^{|q|}$ at random, and repeat until $u, v$ converge in some metric the operations $u \leftarrow p/(Kv)$, $v \leftarrow q/(K^T u)$. Upon convergence, the optimal variable $\alpha^\star$ is zero everywhere except for $\alpha^\star(i_a) = \gamma \log(u_a/\bar{u})$, where $1 \leq a \leq |p|$ and $\bar{u}$ is the geometric mean of vector $u$ (which ensures that $\alpha^\star$ is centered).
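A minimal sketch of these Sinkhorn iterations, assuming dense matrices small enough to hold in memory and a fixed iteration budget rather than a convergence test; variable names are ours:

```python
import numpy as np

def sinkhorn_alpha(p, q, D, gamma, n_iter=1000):
    """Recover the centered dual potential alpha* of W_gamma(p, q).

    p, q: probability vectors restricted to their supports (sizes |p|, |q|).
    D: (|p|, |q|) matrix of ground costs between the two supports.
    """
    K = np.exp(-D / gamma)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        u = p / (K @ v)
        v = q / (K.T @ u)
    # Center alpha* by dividing u by its geometric mean.
    u_bar = np.exp(np.mean(np.log(u)))
    return gamma * np.log(u / u_bar)
```

For small $\gamma$ the entries of $K$ underflow, so a practical implementation would work in the log domain.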
3 Application to Restricted Boltzmann Machines
The restricted Boltzmann machine (RBM) is a generative model of binary data that is composed of $d$ binary observed variables and $h$ binary explanatory variables. The vector $x \in \{0,1\}^d$ represents the state of observed variables, and the vector $y \in \{0,1\}^h$ represents the state of explanatory variables. The RBM associates to each configuration $x$ of observed variables a probability $p_\theta(x)$ defined as $p_\theta(x) = \sum_y e^{-E_\theta(x,y)} / Z_\theta$, where $E_\theta(x, y) = -a^T x - \sum_{j=1}^{h} y_j (w_j^T x + b_j)$ is called the energy and $Z_\theta = \sum_{x,y} e^{-E_\theta(x,y)}$ is the partition function that normalizes the probability distribution to one. The parameters $\theta = (a, \{w_j, b_j\}_{j=1}^{h})$ of the RBM are learned from the data. Knowing the state $x$ of the observed variables, the explanatory variables are independent Bernoulli-distributed with $\Pr(y_j = 1 \mid x) = \sigma(w_j^T x + b_j)$, where $\sigma$ is the logistic map $z \mapsto (1 + e^{-z})^{-1}$. Conversely, knowing the state $y$ of the explanatory variables, the observed variables on which the probability distribution is defined can also be sampled independently, leading to an efficient alternate Gibbs sampling procedure for $p_\theta$. In this RBM model, explanatory variables can be analytically marginalized, which yields the following probability model:
$$p_\theta(x) = e^{-F_\theta(x)} / Z'_\theta,$$
where $F_\theta(x) = -a^T x - \sum_{j=1}^{h} \log(1 + \exp(w_j^T x + b_j))$ is the free energy associated to this model and $Z'_\theta = \sum_x e^{-F_\theta(x)}$ is the partition function.
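For concreteness, the free energy and the hidden-unit conditional can be written in a few lines; this is a sketch under our own conventions, not the authors' implementation:

```python
import numpy as np

def free_energy(x, a, W, b):
    """F_theta(x) = -a^T x - sum_j log(1 + exp(w_j^T x + b_j)).

    x: (d,) binary vector; a: (d,) visible biases;
    W: (h, d) weights, one row w_j per explanatory variable; b: (h,) biases.
    """
    z = W @ x + b
    return -a @ x - np.sum(np.logaddexp(0.0, z))  # log(1+e^z), numerically stable

def sample_hidden(x, W, b, rng):
    """Pr(y_j = 1 | x) = sigmoid(w_j^T x + b_j); one Gibbs half-step."""
    probs = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return (rng.random(len(b)) < probs).astype(np.float64)
```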
Wasserstein Gradient of the RBM  Having written the RBM in its free energy form, the Wasserstein gradient can be obtained by computing the gradient of $F_\theta(x)$ and injecting it in Equation (5):
$$\widehat{\nabla}_{w_j} W_\gamma(\hat{p}, p_\theta) = \big\langle \alpha^\star(x)\, \sigma(z_j)\, x \big\rangle_{\tilde{p}_\theta},$$
where $z_j = w_j^T x + b_j$. Gradients with respect to parameters $a$ and $\{b_j\}_j$ can also be obtained by the same means. In comparison, the gradient of the KL divergence is given by
$$\widehat{\nabla}_{w_j} \mathrm{KL}(\hat{p} \,\|\, p_\theta) = \big\langle \sigma(z_j)\, x \big\rangle_{\tilde{p}_\theta} - \big\langle \sigma(z_j)\, x \big\rangle_{\hat{p}}.$$
While the Wasserstein gradient can, in the same way as the KL gradient, be expressed in a very simple form, the first one is not sum-decomposable. A simple manifestation of the non-decomposability occurs for $\tilde{N} = 1$ (smallest possible sample size): in that case, $\alpha^\star(\tilde{x}_n) = 0$ due to the centering constraint (see Section 2), thus making the gradient zero.
Stability and KL Regularization  Unlike the KL gradient, the Wasserstein gradient depends only indirectly on the data distribution $\hat{p}$. This is a problem when the sample $\tilde{p}_\theta$ generated by the model strongly differs from the examples coming from $\hat{p}$, because there is no weighting $(\alpha^\star(\tilde{x}_n))_n$ of the generated sample that can represent the desired direction in the parameter space $\Theta$. In that case, the Wasserstein gradient will point to a bad local minimum. Closeness between the two empirical samples from this optimization perspective can be ensured by adding a regularization term to the objective that incorporates both the usual quadratic containment term and the KL term, which forces proximity to $\hat{p}$ due to the direct dependence of its gradient on it. The optimization problem becomes:
$$\min_{\theta \in \Theta} \; W_\gamma(\hat{p}, p_\theta) + \lambda \cdot \Omega(\theta) \quad \text{with} \quad \Omega(\theta) = \mathrm{KL}(\hat{p} \,\|\, p_\theta) + \eta \cdot \Big(\|a\|^2 + \sum_j \|w_j\|^2\Big),$$
starting at point $\theta_0 = \arg\min_{\theta \in \Theta} \Omega(\theta)$, and where $\lambda, \eta$ are two regularization hyperparameters that must be selected. Determining the starting point $\theta_0$ is analogous to having an initial pretraining phase. Thus, the proposed Wasserstein procedure can also be seen as finetuning a standard RBM, and forcing the finetuning not to deviate too much from the pretrained solution.
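Putting the pieces together, one parameter update on this regularized objective could look as follows; this is a hedged sketch assuming the Wasserstein gradient (Equation 5) and the KL gradient (standard PCD positive/negative statistics) have been computed elsewhere:

```python
import numpy as np

def regularized_update(params, grad_w, grad_kl, lam, eta, lr):
    """One gradient step on W_gamma(p_hat, p_theta) + lam * Omega(theta).

    params, grad_w, grad_kl: dicts with keys 'a' (visible biases),
    'W' (weights) and 'b' (hidden biases) holding the current parameters,
    the Wasserstein gradient (Eq. 5) and the KL gradient respectively.
    The quadratic containment eta*(||a||^2 + sum_j ||w_j||^2) contributes
    2*eta*a and 2*eta*W to the gradient (nothing for b).
    """
    quad = {'a': 2 * eta * params['a'],
            'W': 2 * eta * params['W'],
            'b': np.zeros_like(params['b'])}
    return {k: params[k] - lr * (grad_w[k] + lam * (grad_kl[k] + quad[k]))
            for k in params}
```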
4 Experiments
We perform several experiments that demonstrate that Wasserstein-trained RBMs learn distributions
that are better from a metric perspective. First, we explore the main characteristics of a
learned distribution that optimizes the Wasserstein objective. Then, we investigate the usefulness of
these learned models on practical problems, such as data completion and denoising, where the metric
between observations occurs in the performance evaluation. We consider three datasets: MNIST-small,
a subsampled version of the original MNIST dataset [11] with only the digits "0" retained, a subset of the UCI PLANTS dataset [19] containing the geographical spread of plant species, and MNIST-code, 128-dimensional code vectors associated to each MNIST digit (additional details in the supplement).
4.1 Training, Validation and Evaluation
All RBM models that we investigate are trained in full batch mode, using for $\tilde{p}_\theta$ the PCD approximation [18] of $p_\theta$, where the sample is refreshed at each gradient update by one step of alternate Gibbs sampling, starting from the sample at the previous time step. We choose a PCD sample of the same size as the training set ($N = \tilde{N}$). The coefficients $\tilde{\alpha}^\star(\tilde{x}_1), \ldots, \tilde{\alpha}^\star(\tilde{x}_{\tilde{N}})$ occurring in the Wasserstein gradient are obtained by solving the smoothed Wasserstein dual between $\hat{p}$ and $\tilde{p}_\theta$, with smoothing parameter $\gamma = 0.1$ and distance $D(x, x') = H(x, x') / \langle H(x, x') \rangle_{\hat{p}}$, where $H$ denotes the Hamming distance between two binary vectors. We use the centered parameterization of the RBM for gradient descent [13, 3]. We perform holdout validation on the quadratic containment coefficient $\eta \in \{10^{-4}, 10^{-3}, 10^{-2}\}$, and on the KL weighting coefficient $\lambda \in \{0, 10^{-1}, 10^{0}, 10^{1}, \infty\}$. The number of hidden units of the RBM is set heuristically to 400 for all datasets. The learning rate is set heuristically to $0.01$ during the pretraining phase and modified to $0.01 \min(1, \lambda^{-1})$ when training on the final objective. The Wasserstein distance $W_\gamma(\hat{p}_\theta, \hat{p})$ is computed between the whole test distribution and the PCD sample at the end of the training procedure. This sample is a fast approximation of the true unbiased sample that would otherwise have to be generated by annealing or enumeration of the states (see the supplement for a comparison of PCD and AIS samples).
4.2 Results and Analysis
The contour plots of Figure 2 show the effect of hyperparameters $\lambda$ and $\eta$ on the Wasserstein distance. For $\lambda = \infty$, only the KL regularizer is active, which is equivalent to training a standard RBM. As we reduce the amount of regularization, the Wasserstein distance is effectively minimized and thus becomes smaller. If $\lambda$ is chosen too small, the Wasserstein distance increases again, for the stability reasons mentioned in Section 3. In all our experiments, we observed that KL pretraining was necessary in order to reach a low Wasserstein distance; not doing so leads to degenerate solutions. The relation between hyperparameters and minimization criteria is consistent across datasets: in all cases, the Wasserstein RBM produces a lower Wasserstein distance than a standard RBM.
Figure 2: Wasserstein distance as a function of hyperparameters $\lambda$ and $\eta$ (contour plots for MNIST-small, PLANTS and MNIST-code, with $\eta \in \{10^{-4}, 10^{-3}, 10^{-2}\}$ and $\lambda \in \{0, 0.1, 1.0, 10.0, \infty\}$). The best RBMs in the Wasserstein sense (RBM-W) are shown in red. The best RBMs in the standard sense (i.e. with $\lambda$ forced to $+\infty$, and minimum KL) are shown in blue.
Samples generated by the standard RBM and the Wasserstein RBM (more precisely their PCD approximation) are shown in Figure 3. The RBM-W produces a reduced set of clean prototypical examples, with less noise than those produced by a regular RBM. All zeros generated by RBM-W have well-defined contours and a round shape but do not reproduce the variety of shapes present in the data. Similarly, the plant territorial spreads generated by the RBM-W form compact and contiguous regions that are prototypical of real spreads, but are less diverse than the data or the sample generated by the standard RBM. Finally, the RBM-W generates codes that, when decoded, are closer to actual MNIST digits.
Figure 3: Examples generated by the standard and the Wasserstein RBMs (400 binary hidden units) on MNIST-small, PLANTS and MNIST-code. (Images for the PLANTS dataset are automatically generated from the Wikimedia Commons template https://commons.wikimedia.org/wiki/File:BlankMap-USA-states-Canada-provinces.svg created by user Lokal_Profil.) Images for MNIST-code are produced by the decoders shown on the right, which map the 128 binary code units through layers of 100 and 200 units to 28x28 pixels.
The PCA plots of Figure 4 superimpose onto the true data distribution (in gray) the distributions
generated by the standard RBM (in blue) and the Wasserstein RBM (in red). In particular, the plots
show the projected distributions on the first two PCA components of the true distribution. While the
standard RBM distribution uniformly covers the data, the one generated by the RBM-W consists of a
finite set of small dense clusters that are scattered across the input distribution. In other words, the
Wasserstein model is biased towards these clusters, and systematically ignores other regions.
Figure 4: Top: Two-dimensional PCA comparison of distributions learned by the RBM and the RBM-W with smoothing parameter $\gamma = 0.1$, on MNIST-small, PLANTS and MNIST-code (data in gray, RBM in blue, RBM-W in red). Plots are obtained by projecting the learned distributions on the first two components of the true distribution. Bottom: RBM-W distributions obtained by varying the parameter $\gamma$ (from small to large $\gamma$).
At the bottom of Figure 4, we analyze the effect of the Wasserstein smoothing parameter $\gamma$ on the learned distribution, with $\gamma = 0.025, 0.05, 0.1, 0.2, 0.4$. We observe on all datasets that the stronger the smoothing, the stronger the shrinkage effect. Although the KL-generated distributions shown in blue may look better (the red distribution strongly departs visually from the data distribution), the red distribution is actually superior if considering the smooth Wasserstein distance as a performance metric, as shown in Figure 2.
4.3 Validating the Shrinkage Effect
To verify that the shrinkage effect observed in Figure 4 is not a training artefact, but a truly expected property of the modeled distribution, we analyze this effect for a simple distribution for which the parameter space can be enumerated. Figure 5 plots the Wasserstein distance between samples of size 100 of a 10-dimensional Gaussian distribution $p \sim \mathcal{N}(0, I)$, and a parameterized model of that distribution $p_\sigma \sim \mathcal{N}(0, \sigma^2 I)$, where $\sigma \in [0, 1]$. The parameter $\sigma$ can be interpreted as a shrinkage parameter. The Wasserstein distance is computed using the cityblock or euclidean metric, both rescaled such that the expected distance between pairs of points is 1.
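This validation experiment is easy to reproduce; the sketch below uses plain Sinkhorn iterations (as in Section 2) and rescales the metric by the mean cross-sample pairwise distance, approximating the normalization described above. Sample size, dimension and the $\gamma$ value follow the text; everything else is our choice:

```python
import numpy as np

def smoothed_w(xs, ys, gamma, metric='cityblock', n_iter=500):
    """Smoothed Wasserstein distance between two equally weighted samples."""
    if metric == 'cityblock':
        D = np.abs(xs[:, None, :] - ys[None, :, :]).sum(-1)
    else:  # euclidean
        D = np.sqrt(((xs[:, None, :] - ys[None, :, :]) ** 2).sum(-1))
    D = D / D.mean()                      # rescale: mean pair distance ~ 1
    K = np.exp(-D / gamma)
    u = np.ones(len(xs)); v = np.ones(len(ys))
    for _ in range(n_iter):
        u = (1 / len(xs)) / (K @ v)
        v = (1 / len(ys)) / (K.T @ u)
    P = u[:, None] * K * v[None, :]       # optimal (smoothed) coupling
    # W_gamma = <D>_P - gamma*H(P) = sum(P*D) + gamma*sum(P*log P)
    return (P * D).sum() + gamma * (P * np.log(P + 1e-30)).sum()

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 10))
for sigma in [0.0, 0.25, 0.5, 0.75, 1.0]:
    model = sigma * rng.normal(size=(100, 10))
    print(sigma, smoothed_w(data, model, gamma=0.1))
```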
Figure 5: Wasserstein distance $W_\gamma(\hat{p}_\sigma, \hat{p})$ between a sample $\hat{p} \sim \mathcal{N}(0, I)$ and a sample $\hat{p}_\sigma \sim \mathcal{N}(0, \sigma^2 I)$, for model parameters $\sigma \in [0, 1]$ and smoothing $\gamma \in \{1.00, 0.32, 0.10, 0.03, 0.01, 0.00\}$, using the cityblock or the euclidean metric.
Interestingly, for all choices of the Wasserstein smoothing parameter $\gamma$, and even for the true Wasserstein distance ($\gamma = 0$, computed here using the OpenCV library), the best model $p_\sigma$ in the empirical Wasserstein sense is a shrunk version of $p$ (i.e. with $\sigma < 1$). When the smoothing is strong enough, the best parameter becomes $\sigma = 0$ (i.e. a Dirac distribution located at the origin). Overall, this experiment gives a training-independent validation for our observation that Wasserstein RBMs learn shrunk, cluster-like distributions. Note that the finite sample size prevents the Wasserstein distance from reaching zero, and always favors shrunk models.
4.4 Data Completion and Denoising
In order to demonstrate the practical relevance of Wasserstein distance minimization, we apply the
learned models to the task of data completion and data denoising, for which the use of a metric is
crucial: Data completion and data denoising performance is generally measured in terms of distance
between the true data and the completed or denoised data (e.g. Euclidean distance for real-valued data,
or Hamming distance H for binary data). Remotely located probability mass that may result from
simple KL minimization would incur a severe penalty on the completion and denoising performance
metric. Both tasks have useful practical applications: Data completion can be used as a first step
when applying discriminative learning (e.g. neural networks or SVM) to data with missing features.
Data denoising can be used as a dimensionality reduction step before training a supervised model.
Let the input $x = [v, h]$ be composed of $d - k$ visible variables $v$ and $k$ hidden variables $h$.
Data Completion  The setting of the data completion experiment is illustrated in Figure 6 (top). The distribution $p_\theta(x \mid v)$ over possible reconstructions can be sampled from using an alternate Gibbs sampler, or by enumeration. The expected Hamming distance between the true state $x^\star$ and the reconstructed state modeled by the distribution $p_\theta(x \mid v)$ is given by iterating over the $2^k$ possible reconstructions:
$$E = \sum_{h} p_\theta(x \mid v) \cdot H(x, x^\star),$$
where $h \in \{0,1\}^k$. Since the reconstruction is a probability distribution, we can compute the expected Hamming error, but also its bias-variance decomposition. On MNIST-small, we hide randomly located image patches of size $3 \times 3$ (i.e. $k = 9$). On PLANTS and MNIST-code, we hide random subsets of $k = 9$ variables. Results are shown in Figure 7 (left), where we compare three types of models: kernel density estimation (KDE), the standard RBM (RBM) and the Wasserstein RBM (RBM-W). The KDE model uses a Gaussian kernel, with the Gaussian scale parameter chosen such that the KL divergence of the model from the validation data is minimized. The RBM-W is better than or comparable to the other models. Of particular interest is the structure of the expected Hamming error: for the standard RBM, a large part of the error comes from the variance (or entropy), while for the Wasserstein RBM, the bias term is the most contributing. This can be related to what is observed in Figure 4: for a data point outside the area covered by the red points, the reconstruction is systematically redirected towards the nearest red cluster, thus incurring a systematic bias.
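The expected Hamming error can be computed by explicit enumeration whenever $k$ is small; a sketch, assuming a free-energy model $p_\theta(x) \propto e^{-F_\theta(x)}$ and our own helper names:

```python
import itertools
import numpy as np

def expected_hamming_error(free_energy, x_true, hidden_idx):
    """E = sum_h p_theta(x | v) * H(x, x_star) over the 2^k completions.

    free_energy: callable x -> F_theta(x);
    x_true: (d,) binary ground-truth vector (integer dtype);
    hidden_idx: indices of the k hidden (to-be-completed) variables.
    """
    candidates, neg_F = [], []
    for bits in itertools.product([0, 1], repeat=len(hidden_idx)):
        x = x_true.copy()
        x[hidden_idx] = bits
        candidates.append(x)
        neg_F.append(-free_energy(x))
    neg_F = np.asarray(neg_F)
    # p_theta(x | v) by softmax over the enumerated completions.
    w = np.exp(neg_F - neg_F.max())
    w /= w.sum()
    hamming = np.array([np.sum(c != x_true) for c in candidates])
    return float(w @ hamming)
```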
Data Denoising  Here, we consider a simple noise process where, for a predefined subset of $k$ variables denoted by $h$, a known number $l$ of bit flips occur randomly. The remaining $d - k$ variables are denoted by $v$. The setting of the experiment is illustrated in Figure 6 (bottom). Calling $x^\star$ the original and $\tilde{x}$ its noisy version resulting from flipping $l$ variables of $h$, the expected Hamming error
Figure 6: Illustration of the completion and denoising setup. For each image, we select a known subset of pixels that we hide (completion, top: hiding three pixels of the original image yields 8 possible image reconstructions) or corrupt with noise (denoising, bottom: flipping two pixels, then flipping two pixels again, yields 6 possible image reconstructions). Each possible reconstruction has a particular Hamming distance to the original example. The expected Hamming error is computed by weighting the Hamming distances by the probability that the model assigns to the reconstructions.
Figure 7: Performance on the completion (left) and denoising (right) tasks of the kernel density estimation, the standard RBM and the Wasserstein RBM, on MNIST-small, PLANTS and MNIST-code. The total length of the bars is the expected Hamming error. Dark gray and light gray sections of the bars give the bias-variance decomposition.
is given by iterating over the $\binom{k}{l}$ states $x$ with the same visible variables $v$ that are at distance $l$ of $\tilde{x}$:
$$E = \sum_{h} p_\theta\big(x \mid v,\, H(x, \tilde{x}) = l\big) \cdot H(x, x^\star),$$
where $h \in \{0,1\}^k$. Note that the original example $x^\star$ is necessarily part of this set of states under the noise model assumption. For the MNIST-small data, we choose randomly located image patches of size $4 \times 3$ or $3 \times 4$ (i.e. $k = 12$), and generate $l = 4$ random bit flips within the selected patch. For PLANTS and MNIST-code, we generate $l = 4$ bit flips in $k = 12$ randomly preselected input variables. Figure 7 (right) shows the denoising error in terms of expected Hamming distance on the same datasets. The RBM-W is better than or comparable to the other models. As for the completion task, the main difference between the two RBMs is the bias/variance ratio, where again the Wasserstein RBM tends to have a larger bias. This experiment considered a very simple noise model consisting of a fixed number $l$ of random bit flips over a small predefined subset of variables. Denoising highly corrupted complex data will however require combining Wasserstein models with more flexible noise models such as the ones proposed by [17].
5 Conclusion
We have introduced a new objective for restricted Boltzmann machines (RBM) based on the smooth Wasserstein distance. We derived the gradient of the Wasserstein distance from its dual formulation, and used it to effectively train an RBM. Unlike the usual Kullback-Leibler (KL) divergence, our Wasserstein objective takes into account the metric of the data. In all considered scenarios, the Wasserstein RBM produced distributions that strongly departed from standard RBMs, and outperformed them on practical tasks such as completion or denoising.
More generally, we demonstrated empirically that, when learning probability densities, the reliance on distributions that indirectly incorporate the desired metric can be substituted by training procedures that make the desired metric directly part of the learning objective. Thus, Wasserstein training can be seen as a more direct approach to density estimation than regularized KL training. Future work will aim to further explore the interface between Boltzmann learning and Wasserstein minimization, with the aim of scaling the newly proposed learning technique to larger and more complex data distributions.
Acknowledgements
This work was supported by the Brain Korea 21 Plus Program through the National Research Foundation of
Korea funded by the Ministry of Education. This work was also supported by the grant DFG (MU 987/17-1). M.
Cuturi gratefully acknowledges the support of JSPS young researcher A grant 26700002. Correspondence to
GM, KRM and MC.
References
[1] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147-169, 1985.
[2] F. Bassetti, A. Bodini, and E. Regazzini. On minimum Kantorovich distance estimators. Statistics & Probability Letters, 76(12):1298-1302, 2006.
[3] K. Cho, T. Raiko, and A. Ilin. Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3):805-831, 2013.
[4] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2016.
[5] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26, pages 2292-2300, 2013.
[6] M. Cuturi and A. Doucet. Fast computation of Wasserstein barycenters. In Proceedings of the 31st International Conference on Machine Learning, ICML, pages 685-693, 2014.
[7] G. E. Dahl, M. Ranzato, A. Mohamed, and G. E. Hinton. Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in Neural Information Processing Systems 23, pages 469-477, 2010.
[8] C. Frogner, C. Zhang, H. Mobahi, M. Araya, and T. Poggio. Learning with a Wasserstein loss. In NIPS, pages 2044-2052, 2015.
[9] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[10] P. J. Huber. Robust Statistics. Springer, 2011.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, Nov 1998.
[12] B. M. Marlin, K. Swersky, B. Chen, and N. de Freitas. Inductive principles for restricted Boltzmann machine learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS, pages 509-516, 2010.
[13] G. Montavon and K.-R. Müller. Deep Boltzmann machines and the centering trick. In Neural Networks: Tricks of the Trade - Second Edition, LNCS, pages 621-637. Springer, 2012.
[14] Y. Rubner, L. Guibas, and C. Tomasi. The earth mover's distance, multi-dimensional scaling, and color-based image retrieval. In Proceedings of the ARPA Image Understanding Workshop, pages 661-668, 1997.
[15] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, AISTATS, pages 448-455, 2009.
[16] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep Boltzmann machines. Journal of Machine Learning Research, 15(1):2949-2980, 2014.
[17] Y. Tang, R. Salakhutdinov, and G. E. Hinton. Robust Boltzmann machines for recognition and denoising. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2264-2271, 2012.
[18] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML), pages 1064-1071, 2008.
[19] United States Department of Agriculture. The PLANTS Database, 2012.
[20] C. Villani. Optimal Transport: Old and New, volume 338. Springer Verlag, 2009.
Human Decision-Making under Limited Time
Pedro A. Ortega
Department of Psychology
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Alan A. Stocker
Department of Psychology
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
Subjective expected utility theory assumes that decision-makers possess unlimited computational resources to reason about their choices; however, virtually all decisions in everyday life are made under resource constraints, i.e. decision-makers are bounded in their rationality. Here we experimentally tested the predictions made by a formalization of bounded rationality based on ideas from statistical mechanics and information theory. We systematically tested human subjects in their ability to solve combinatorial puzzles under different time limitations. We found that our bounded-rational model accounts well for the data. The decomposition of the fitted model parameters into the subjects' expected utility function and resource parameter provides interesting insight into the subjects' information capacity limits. Our results confirm that humans gradually fall back on their learned prior choice patterns when confronted with increasing resource limitations.
1 Introduction
Human decision-making is not perfectly rational. Most of our choices are constrained by many factors such as perceptual ambiguity, time, lack of knowledge, or computational effort [6]. Classical
theories of rational choice do not apply in such cases because they ignore information-processing
resources, assuming that decision-makers always pick the optimal choice [10]. However, it is well
known that human choice patterns deviate qualitatively from the perfectly rational ideal with increasing resource limitations.
It has been suggested that such limitations in decision-making can be formalized using ideas from
statistical mechanics [9] and information theory [16]. These frameworks propose that decision-makers act as if their choice probabilities were an optimal compromise between maximizing the expected utility and minimizing the KL-divergence from a set of prior choice probabilities, where the trade-off is determined by the amount of available resources. This optimization scheme reduces the decision-making problem to the inference of the optimal choice from a stimulus, where the likelihood function results from a combination of the decision-maker's subjective preferences and the resource limitations.
The aim of this paper is to systematically validate the model of bounded-rational decision-making
on human choice data. We conducted an experiment in which subjects had to solve a sequence
of combinatorial puzzles under time pressure. By manipulating the allotted time for solving each
puzzle, we were able to record choice data under different resource conditions. We then fit the
bounded-rational choice model to the dataset, obtaining a decomposition of the choice probabilities
in terms of a resource parameter and a set of stimulus-dependent utility functions. Our results show
that the model captures very well the gradual shifts due to increasing time constraints that are present
in the subjects? empirical choice patterns.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
2 A Probabilistic Model of Bounded-Rational Choices
We model a bounded-rational decision-maker as an expected utility maximizer that is subject to information constraints. Formally, let $\mathcal{X}$ and $\mathcal{Y}$ be two finite sets, the former corresponding to a set of stimuli and the latter to a set of choices; and let $P(y)$ be a prior distribution over optimal choices $y \in \mathcal{Y}$ that the decision-maker may have learned from experience. When presented with a stimulus $x \in \mathcal{X}$, a bounded-rational decision-maker transforms the prior choice probabilities $P(y)$ into posterior choice probabilities $P(y|x)$ and then generates a choice according to $P(y|x)$.
This transformation is modeled as the optimization of a regularized expected utility known as the free energy functional:
$$F\big[Q(y|x)\big] := \underbrace{\sum_y Q(y|x)\, U_x(y)}_{\text{Expected Utility}} \; - \; \underbrace{\frac{1}{\beta} \sum_y Q(y|x) \log \frac{Q(y|x)}{P(y)}}_{\text{Regularization}}, \tag{1}$$
where the posterior is defined as the maximizer $P(y|x) := \arg\max_{Q(y|x)} F[Q(y|x)]$. Crucially, the optimization is determined by two factors. The first is the decision-maker's subjective utility function $U_x : \mathcal{Y} \to \mathbb{R}$ encoding the desirability of a choice $y$ given a stimulus $x$. The second is the inverse temperature $\beta$, which determines the resources of deliberation available for the decision task¹, but which are neither known to, nor controllable by, the decision-maker. The resulting posterior has an analytical expression given by the Gibbs distribution
$$P(y|x) = \frac{1}{Z_\beta(x)} P(y) \exp\big\{\beta\, U_x(y)\big\}, \tag{2}$$
where $Z_\beta(x)$ is a normalizing constant [9]. The expression (2) highlights a connection to inference: bounded-rational decisions can also be computed via Bayes' rule in which the likelihood is determined by $\beta$ and $U_x$ as follows:
$$P(y|x) = \frac{P(y)\, P(x|y)}{\sum_{y'} P(y')\, P(x|y')}, \quad \text{hence} \quad P(x|y) \propto \exp\big\{\beta\, U_x(y)\big\}. \tag{3}$$
The objective function (1) can be motivated as a trade-off between maximizing expected utility and minimizing information cost [9, 16]. Near-zero values of $\beta$, which correspond to heavily-regularized decisions, yield posterior choice probabilities that are similar to the prior. Conversely, with growing values of $\beta$, the posterior choice probabilities approach the perfectly-rational limit.
Connection to regret.  Bounded-rational decision-making is related to regret theory [2, 4, 8]. To see this, define the certainty-equivalent as the maximum attainable value for (1):
$$U_x^\star := \max_{Q(y|x)} F\big[Q(y|x)\big] = \frac{1}{\beta} \log Z_\beta(x). \tag{4}$$
The certainty-equivalent quantifies the net worth of the stimulus $x$ prior to making a choice. The decision process treats (4) as a reference utility used in the assessment of the alternatives. Specifically, the modulation of any choice is obtained by measuring up the utility against the certainty-equivalent:
$$\underbrace{\log \frac{P(y|x)}{P(y)}}_{\text{Change of } y} = -\beta \underbrace{\big[\, U_x^\star - U_x(y) \,\big]}_{\text{Regret of } y}. \tag{5}$$
Accordingly, the difference in log-probability is proportional to the negative regret [3]. The decision-maker's utility function specifies a direction of change relative to the certainty-equivalent, whereas the strength of the modulation is determined by the inverse temperature.

¹ For simplicity, here we consider only strictly positive values for the inverse temperature $\beta$, but its domain can be extended to negative values to model other effects, e.g. risk-sensitive estimation [9].
3 Experimental Methods
We conducted a choice experiment where subjects had to solve puzzles under time pressure. Each puzzle consisted of a Boolean formula in conjunctive normal form (CNF) that was disguised as an arrangement of circular patterns (see Fig. 1). The task was to find a truth assignment that satisfied the formula. Subjects could pick an assignment by setting the colors of a central pattern highlighted in gray. Formally, the puzzles and the assignments corresponded to the stimuli $x \in \mathcal{X}$ and the choices $y \in \mathcal{Y}$ respectively, and the duration of the puzzle was the resource parameter that we controlled (see Equation 1).
Figure 1: Example puzzle. a) Each puzzle is a set of six circularly arranged patches containing patterns of black and white circles. In each trial, the positions of the patches were randomly assigned to one of the six possible locations. Subjects had to choose the three center colors such that there was at least one (color and position) match for each patch. For instance, the choice in (b) only matches four out of six patches (in red), while (c) solves the puzzle. The puzzle is a visualization of the Boolean formula in (d).
We restricted our puzzles to a set of five CNF formulas having 6 clauses, 2 literals per clause, and 3 variables. Subjects were trained only on the first four puzzles, whereas the last one was used as a control puzzle during the test phase. All the chosen puzzles had a single solution out of the $2^3 = 8$ possible assignments.
We chose CNF formulas because they provide a general² and flexible platform for testing decision-making behavior. Crucially, unlike in an estimation task, finding the relation between a stimulus and a choice is non-trivial and requires solving a computational problem.
3.1 Data Collection
Two symmetric versions of the experiment were conducted on Amazon Mechanical Turk. For each,
we collected choice data from 15 anonymized participants living in the United States, totaling 30
subjects. Subjects were paid 10 dollars for completing the experiment. The typical runtime of the
experiment ranged between 50 and 130 minutes.
For each subject, we recorded a sequence of 90 training and 285 test trials. The puzzles were displayed throughout the whole trial, during which the subjects could modify their choice at will. The
training trials allowed subjects to familiarize themselves with the task and the stimuli, whereas the
test trials measured their adapted choice behavior as a function of the stimulus and the task duration.
Training trials were presented in blocks of 18 for a long, fixed duration; the test trials, which were
of variable duration, were presented in blocks of 19 (18 regular + 1 control trial). To avoid the collection of poor quality data, subjects had to repeat a block if they failed more than 6 trials within the
same block, thereby setting a performance threshold that was well above chance level. Participants
could initiate a block whenever they felt ready to proceed. Within a block, the inter-trial durations
were drawn uniformly between 0.5 and 1.5s.
Each trial consisted of one puzzle that had to be solved within a limited time. Training trials lasted
10s each, while test trials had durations of 1.25, 2.5, and 5s. Apart from a visual cue shown 1s before
the end of each trial, there was no explicit feedback communicating the trial length. Therefore,
subjects did not know the duration of individual test trials beforehand and thus could not use this
information in their solution strategy. A trial was considered successful only if all the clauses of the
puzzle were satisfied.
² More precisely, the 2-SAT and SAT problems are NL- and NP-complete respectively. This means that every other decision problem within the same complexity class can be reduced (i.e. rephrased) as a SAT problem.
4 Analysis
The recorded data $\mathcal{D}$ consists of a set of tuples $(x, r, y)$, where $x \in \mathcal{X}$ is a stimulus, $r \in \mathcal{R}$ is a resource parameter (i.e. duration), and $y \in \mathcal{Y}$ a choice. In order to analyze the data, we made the following assumptions:
1. Transient regime: During the training trials, the subjects converged to a set of subjective preferences over the choices which depended only on the stimuli.
2. Permanent regime: During the test trials, subjects did not significantly change the preferences that they learned during the training trials. Specifically, choices in the same stimulus-duration group were i.i.d. throughout the test phase.
3. Negligible noise: We assumed that the operation of the input device and the cue signaling the imminent end of the trial did not have a significant impact on the distribution over choices.
Our analysis focused only on the test trials. Let $P(x, r, y)$ denote the empirical probabilities³ of the tuples $(x, r, y)$ estimated from the data. From these, we derived the probability distribution $P(x, r)$ over the stimulus-resource context, the prior $P(y)$ over choices, and the posterior $P(y|x, r)$ over choices given the context, through marginalization and conditioning.
4.1 Inferring Preferences
By fitting the model, we decomposed the choice probabilities into: (a) an inverse temperature function $\beta : \mathcal{R} \to \mathbb{R}$; and (b) a set of subjective utility functions $U_x : \mathcal{Y} \to \mathbb{R}$, one for each stimulus $x$. We assumed that the sets $\mathcal{X}$, $\mathcal{R}$, and $\mathcal{Y}$ were finite, and we used vector representations for $\beta$ and the $U_x$. To perform the decomposition, we minimized the average Kullback-Leibler divergence
$$J = \sum_{x,r} P(x, r) \sum_{y} P(y|x, r) \log \frac{P(y|x, r)}{Q(y|x, r)} \tag{6}$$
w.r.t. the inverse temperatures $\beta(r)$ and the utilities $U_x(y)$, through the probabilities $Q(y|x, r)$ of the choice $y$ given the context $(x, r)$ as derived from the Gibbs distribution
$$Q(y|x, r) = \frac{1}{\tilde{Z}} P(y) \exp\big\{\beta(r)\, U_x(y)\big\}, \tag{7}$$
where $\tilde{Z}$ is the normalizing constant. We used the objective function (6) because it is the Bregman divergence over the simplex of choice probabilities [1]. Thus, by minimizing the objective function (6) we were seeking a decomposition such that the Shannon information contents of $P(y|x, r)$ and $Q(y|x, r)$ were matched against each other in expectation.
We minimized (6) using gradient descent. For this, we first rewrote (6) as
$$J = \sum_{x,r,y} P(x, r, y) \left[ \log \frac{P(y|x, r)}{P(y)} - \beta(r)\, U_x(y) + \log \tilde{Z} \right]$$
to expose the coordinates of the exponential manifold, and then calculated the gradient. The partial derivatives of $J$ w.r.t. $\beta(r)$ and $U_x(y)$ are equal to
$$\frac{\partial J}{\partial \beta(r)} = \sum_{x} P(x, r) \sum_{y} \big[ Q(y|x, r) - P(y|x, r) \big]\, U_x(y) \tag{8}$$
and
$$\frac{\partial J}{\partial U_x(y)} = \sum_{r} P(x, r) \big[ Q(y|x, r) - P(y|x, r) \big]\, \beta(r) \tag{9}$$
respectively. The Gibbs distribution (7) admits an infinite number of decompositions, and therefore we had to fix the scaling factor and the offset to obtain a unique solution. The scale was set by clamping the value of $\beta(r_0) = \beta_0$ for an arbitrarily chosen resource parameter $r_0 \in \mathcal{R}$; we used
$\beta(r_0) = 1$ for $r_0 = 1\,$s. The offset was fixed by normalizing the utilities. A simple way to achieve this is by subtracting the certainty-equivalent from the utilities, i.e. for all $(x, y)$,
$$U_x(y) \;\longleftarrow\; U_x(y) - \frac{1}{\beta(r_0)} \log \sum_{y} P(y) \exp\big\{\beta(r_0)\, U_x(y)\big\}. \tag{10}$$
Utilities normalized in this way are proportional to the negative regret (see Section 2) and thus have an intuitive interpretation as modulators of change of the choice distribution.
The resulting decomposition algorithm repeats the following two steps until convergence: first, it updates the inverse temperature and utility functions using gradient descent, i.e.
$$\beta(r) \;\longleftarrow\; \beta(r) - \nu_t \frac{\partial J}{\partial \beta(r)} \quad \text{and} \quad U_x(y) \;\longleftarrow\; U_x(y) - \nu_t \frac{\partial J}{\partial U_x(y)} \tag{11}$$
for all $(r, x, y) \in \mathcal{R} \times \mathcal{X} \times \mathcal{Y}$; second, it projects the parameters back onto a standard submanifold by re-clamping $\beta(r_0) = \beta_0$ and normalizing the utilities in each iteration using (10). For the learning rate $\nu_t > 0$, we chose a simple schedule that satisfies the Robbins-Monro conditions $\sum_t \nu_t = \infty$ and $\sum_t \nu_t^2 < \infty$.

³ More precisely, $P(x, r, y) \propto N(x, r, y) + 1$, where $N(x, r, y)$ is the count of occurrences of $(x, r, y)$.
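A compact sketch of this decomposition procedure (gradient step plus projection onto the standard submanifold), with array shapes and variable names of our choosing; the $1/(1+t)$ schedule is one concrete choice satisfying the Robbins-Monro conditions:

```python
import numpy as np

def decompose(P_xr, P_post, prior, r0=0, beta0=1.0, n_steps=5000, nu0=0.5):
    """Fit beta(r) and U_x(y) by minimizing the KL objective (6).

    P_xr: (X, R) empirical context probabilities P(x, r);
    P_post: (X, R, Y) empirical posteriors P(y|x, r);
    prior: (Y,) empirical prior P(y); r0 indexes the clamped duration.
    """
    X, R = P_xr.shape
    beta = np.ones(R)
    U = np.zeros((X, prior.shape[0]))
    log_prior = np.log(prior)
    for t in range(n_steps):
        # Model posterior Q(y|x,r), Eq. (7): prior-weighted softmax.
        logits = beta[None, :, None] * U[:, None, :] + log_prior[None, None, :]
        Q = np.exp(logits - logits.max(axis=-1, keepdims=True))
        Q /= Q.sum(axis=-1, keepdims=True)
        diff = (Q - P_post) * P_xr[:, :, None]
        grad_beta = np.einsum('xry,xy->r', diff, U)      # Eq. (8)
        grad_U = np.einsum('xry,r->xy', diff, beta)      # Eq. (9)
        nu = nu0 / (1 + t)                               # Robbins-Monro schedule
        beta -= nu * grad_beta
        U -= nu * grad_U
        # Projection: rescale so beta(r0) = beta0 (the product beta*U is
        # invariant), then remove the offset via Eq. (10).
        c = beta0 / beta[r0]
        beta *= c
        U /= c
        ce = np.log((prior[None, :] * np.exp(beta0 * U)).sum(-1)) / beta0
        U -= ce[:, None]
    return beta, U
```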
4.2 Expected Utility and Decision Bandwidth
The inferred model is useful for investigating the decision-maker's performance under different settings of the resource parameter, in particular for determining the asymptotic performance limits. Two quantities are of special interest: the expected utility averaged over the stimuli and the mutual information between the stimulus and the choice, both as functions of the inverse temperature β. Given β, we define these quantities as

EU_\beta := \sum_{x,y} P(x) Q_\beta(y|x) U_x(y)
\quad\text{and}\quad
I_\beta := \sum_{x,y} P(x) Q_\beta(y|x) \log \frac{Q_\beta(y|x)}{Q_\beta(y)}    (12)

respectively. Both definitions are based on the joint distribution P(x) Q_β(y|x), in which Q_β(y|x) ∝ P(y) exp{β U_x(y)} is the Gibbs distribution derived from the prior P(y) and the utility functions U_x(y). The marginal over choices is given by Q_β(y) = \sum_x P(x) Q_β(y|x). The mutual information I_β is a measure of the decision bandwidth, because it quantifies the average amount of information that the subject has to extract from the stimulus in order to produce the choice.
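A short sketch of how these two curves can be evaluated, under the same assumed array conventions as above:

```python
import numpy as np

def performance_curves(P_x, P_y, U, betas, eps=1e-12):
    """Expected utility EU_beta and decision bandwidth I_beta of eq. (12),
    evaluated on a grid of inverse temperatures."""
    EU, I = [], []
    for b in betas:
        Q = P_y[None, :] * np.exp(b * U)       # unnormalized Q_beta(y|x)
        Q /= Q.sum(axis=1, keepdims=True)
        joint = P_x[:, None] * Q               # P(x) * Q_beta(y|x)
        Qy = joint.sum(axis=0)                 # marginal Q_beta(y)
        EU.append((joint * U).sum())
        I.append((joint * np.log((Q + eps) / (Qy[None, :] + eps))).sum())
    return np.array(EU), np.array(I)
```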
5 Results

5.1 Decomposition into prior, utility, and inverse temperature
For each one of the 30 subjects, we first calculated the empirical choice probabilities and then estimated their decomposition into an inverse temperature β and utility functions U_x using the procedure detailed in the previous section. The mean error of the fit was very low (0.0347 ± 0.0024 bits), implying that the choice probabilities are well explained by the model. As an example, Fig. 2 shows the decomposition for subject 1 (error 0.0469 bits, 83rd percentile rank) along with a comparison between the empirical posterior and the model posterior calculated from the inferred components using equation (7). As durations become longer and β increases, the model captures the gradual shift from the prior towards the optimal choice distribution.
As seen in Fig. 3, the resulting decomposition is stable and shows little variability across subjects. The stimuli of version B of the experiment differed from version A only in that they were color-inverted, leading to mirror-symmetric decompositions of the prior and the utility functions. The results suggest the following trends:
• Prior: Compared to the true distribution over solutions, subjects tended to concentrate their choices slightly more on the most frequent optimal solution (i.e. either y = 2 or y = 7 for version A or B respectively) and on the all-black or all-white solution (either y = 1 or y = 8).
[Figure 2 graphic: per puzzle (x = 2, 1, 4, 6 and the untrained x = 7*), the stimulus with its probability, the empirical vs. model posterior with the optimum marked, the utilities, the inverse temperature, and the prior (empirical vs. true); axes: Time [s] and Choice [id].]
Figure 2: Decomposition of subject 1's posterior choice probabilities. Each row corresponds to a different puzzle. The left column shows each puzzle's stimulus and optimal choice. The posterior distributions P(y|x, r) were decomposed into a prior P(y); a set of time-dependent inverse temperatures β(r); and a set of stimulus-dependent utility functions U_x over choices, normalized relative to the certainty-equivalent (10). The plots compare the subject's empirical frequencies against the model fit (in the posterior plots) or against the true optimal choice probabilities (in the prior plot). The stimuli are shown on the left (more specifically, one out of the 6! arrangements of patches) along with their probability. Note that the untrained stimulus x = 7 is the color-inverse of x = 2.
• Inverse temperature: The inverse temperature increases monotonically with longer durations, and the dependency is approximately linear in log-time (Fig. 2 and 3).
• Utility functions: In the case of the stimuli that subjects were trained on (namely, x ∈ {1, 2, 4, 6}), the maximum subjective utility coincides with the solution of the puzzle. Notice that some choices are enhanced while others are suppressed according to their subjective utility function. In particular, the choice for the most frequent stimulus (x = 2) is suppressed when it is suboptimal. In the case of the untrained stimulus (x = 7), the utility function is comparatively flat and variable across subjects.
Finally, as a comparison, we also computed the decomposition assuming a Softmax function (or Boltzmann distribution):

Q(y|x,r) = \frac{1}{Z_\beta} \exp\{\beta(r) U_x(y)\}.    (13)

The mean error of the resulting fit was significantly worse (error 0.0498 ± 0.0032 bits) than the one based on (7), implying that the inclusion of the prior choice probabilities P(y) improves the explanation of the choice data.
[Figure 3 graphic: inferred utilities for x = 1, 2, 4, 6 and the untrained x = 7*, the inverse temperature, and the prior, for versions A and B; optimal choices marked; axes: Time [s] and Choice [id].]
Figure 3: Summary of inferred preferences across all subjects. The two rows depict the results for
the two versions of the experiment, each one averaged over 15 subjects. The stimuli of both versions
are the same but with their colors inverted, resulting in a mirror symmetry along the vertical axis.
The figure shows the inferred utility functions (normalized to the certainty-equivalent); the inverse
temperatures; and the prior over choices. Optimal choices are highlighted in gray. Error bars denote
one standard deviation.
[Figure 4 graphic: panels for Expected Utility, Mutual Information, and % Correct, for Subject 1 (top) and the Average subject (bottom).]
Figure 4: Extrapolation of the performance measures. The panels show the expected utility EU_β, the mutual information I_β, and the expected percentage of correct choices as a function of the inverse temperature β. The top and bottom rows correspond to subject 1 and the averaged subjects respectively. Each plot shows the performance measure obtained from the empirical choice probabilities (blue markers) and the choice probabilities derived from the model (red curve) together with the maximum attainable value (dotted red).
5.2 Extrapolation of performance measures
We calculated the expected utility and the mutual information as a function of the inverse temperature using (12). The resulting curves for subject 1 and the average subject are shown in Fig. 4 together with the predicted percentage of correct choices. All the curves are monotonically increasing and upper bounded. The expected utility and the percentage of correct choices are concave in the inverse temperature, indicating marginally diminishing returns with longer durations. Similarly, the mutual information approaches asymptotically the upper bound set by the stimulus entropy H(X) ≈ 1.792 bits (excluding the untrained stimulus).
6 Discussion and Conclusion
It has long been recognized that the model of perfect rationality does not adequately capture human decision-making because it neglects the numerous resource limitations that prevent the selection of the optimal choice [13]. In this work, we considered a model of bounded-rational decision-making inspired by ideas from statistical mechanics and information theory. A distinctive feature of this model is the interplay between the decision-maker's preferences, a prior distribution over choices, and a resource parameter. To test the model, we conducted an experiment in which participants had to solve puzzles under time pressure. The experimental results are very well predicted by the model, which allows us to draw the following conclusions:

1. Prior: When the decision-making resources decrease, people's choices fall back on a prior distribution. This conclusion is supported by two observations. First, the bounded-rational model explains the gradual shift of the subjects' choice probabilities towards the prior as the duration of the trial is reduced (e.g. Fig. 2). Second, the model fit obtained by the Softmax rule (13), which differs from the bounded-rational model (7) only by the lack of a prior distribution, has a significantly larger error. Thus, our results conflict with the predictions made by models that lack a prior choice distribution, most notably with expected utility theory [11, 17] and the choice models based on the Softmax function (typical in reinforcement learning, but also in e.g. the logit rule of quantal response equilibria [5] or in maximum entropy inverse reinforcement learning [18]).
2. Utility and Inverse Temperature: Posterior choice probabilities can be meaningfully parameterized in terms of utilities (which capture the decision-maker's preferences) and inverse temperatures (which encode resource constraints). This is evidenced by the quality of the fit and the cogent operational role of the parameters. Utilities are stimulus-contingent enhancers/inhibitors that act upon the prior choice probabilities, consistent with the role of utility as a measure of relative desirability in regret theory [3] and also related to the cognitive functions attributed to the dorsal anterior cingulate cortex [12]. On the other hand, the inverse temperature captures a determinant factor of choice behavior that is independent of the preferences, mathematically embodied in the low-rank assumption of the log-likelihood function that we used for the decomposition in the analysis. This assumption does not comply with the necessary conditions for rational meta-reasoning, wherein decision-makers can utilize the knowledge about their own resources in their strategy [7].
3. Preference Learning: Utilities are learned from experience. As is seen in the utility functions of Fig. 3, subjects did not learn the optimal choice of the untrained stimulus (i.e. x = 7), in spite of it being just a simple color-inversion of the most frequent stimulus (i.e. x = 2). Our experiment did not address the mechanisms that underlie the acquisition of preferences. However, given that the information necessary to establish a link between the stimulus and the optimal choice is below two bits (that is, far below the 3 · 2² · 6 = 72 bits necessary to represent an arbitrary member of the considered class of puzzles), it is likely that the training phase had subjects synthesize perceptual features that allowed them to efficiently identify the optimal solution. Other avenues are explored in [14, 15] and references therein.
4. Diminishing returns: The decision-maker's performance is marginally diminishing in the amount of resources. This is seen in the concavity of the expected utility curve (Fig. 4; similarly in the percentage of correct choices) combined with the sub-linear growth of the inverse temperature as a function of the duration (Fig. 3). For most subjects, the model predicts a perfectly-rational choice behavior in the limit of unbounded trial duration.
In summary, in this work we have shown empirically that the model of bounded rationality provides
an adequate explanatory framework for resource-constrained decision-making in humans. Using a
challenging cognitive task in which we could control the time available to arrive at a choice, we have
shown that human decision-making can be explained in terms of a trade-off between the gains of
maximizing subjective utilities and the losses due to the deviation from a prior choice distribution.
Acknowledgements
This work was supported by the Office of Naval Research (Grant N000141110744) and the University of Pennsylvania.
References

[1] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705-1749, 2005.
[2] D. E. Bell. Regret in decision making under uncertainty. Operations Research, 33:961-981, 1982.
[3] H. Bleichrodt and P. P. Wakker. Regret theory: A bold alternative to the alternatives. The Economic Journal, 125(583):493-532, 2015.
[4] P. C. Fishburn. The Foundations of Expected Utility. D. Reidel Publishing, Dordrecht, 1982.
[5] J. W. Friedman and C. Mezzetti. Random belief equilibrium in normal form games. Games and Economic Behavior, 51(2):296-323, 2005.
[6] G. Gigerenzer and R. Selten. Bounded Rationality: The Adaptive Toolbox. MIT Press, Cambridge, MA, 2001.
[7] F. Lieder, D. Plunkett, J. B. Hamrick, S. J. Russell, N. Hay, and T. Griffiths. Algorithm selection by rational metareasoning as a model of human strategy selection. Advances in Neural Information Processing Systems, pages 2870-2878, 2014.
[8] G. Loomes and R. Sugden. Regret theory: An alternative theory of rational choice under uncertainty. Economic Journal, 92:805-824, 1982.
[9] P. A. Ortega and D. A. Braun. Thermodynamics as a theory of decision-making with information-processing costs. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 469(2153), 2013.
[10] A. Rubinstein. Modeling Bounded Rationality. MIT Press, 1998.
[11] L. J. Savage. The Foundations of Statistics. John Wiley and Sons, New York, 1954.
[12] A. Shenhav, M. M. Botvinick, and J. D. Cohen. The expected value of control: An integrative theory of anterior cingulate cortex function. Neuron, 79:217-240, 2013.
[13] H. Simon. Models of Bounded Rationality. MIT Press, Cambridge, MA, 1984.
[14] N. Srivastava and P. R. Schrater. Rational inference of relative preferences. Advances in Neural Information Processing Systems, 2012.
[15] N. Srivastava, E. Vul, and P. R. Schrater. Magnitude-sensitive preference formation. Advances in Neural Information Processing Systems, 2014.
[16] N. Tishby and D. Polani. Information theory of decisions and actions. In V. Cutsuridis, A. Hussain, and J. G. Taylor, editors, Perception-Action Cycle: Models, Architectures, and Hardware. Springer, Berlin, 2011.
[17] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1944.
[18] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433-1438, 2008.
Visual Motion Computation in Analog
VLSI using Pulses
Rahul Sarpeshkar, Wyeth Bair and Christof Koch
Computation and Neural Systems Program
California Institute of Technology
Pasadena, CA 91125.
Abstract
The real time computation of motion from real images
using a single chip with integrated sensors is a hard problem. We present two analog VLSI schemes that use pulse
domain neuromorphic circuits to compute motion. Pulses
of variable width, rather than graded potentials, represent
a natural medium for evaluating temporal relationships.
Both algorithms measure speed by timing a moving edge
in the image. Our first model is inspired by Reichardt's
algorithm in the fly and yields a non-monotonic response
vs. velocity curve. We present data from a chip that
implements this model. Our second algorithm yields a
monotonic response vs. velocity curve and is currently
being translated into silicon.
1 Introduction
Analog VLSI chips for the real time computation of visual motion have been the
focus of much active research because of their importance as sensors for robotic
applications. Correlation schemes such as those described in (Delbrück, 1993)
have been found to be more robust than gradient schemes described in (Tanner and
Mead, 1986), because they do not involve noise-sensitive operations like spatial differentiation and division. A comparison of four experimental schemes may be
found in (Horiuchi et al., 1992). In spite of years of work, however, there is still no
motion chip that robustly computes motion under all environmental conditions.
Motion algorithms operating on higher-level percepts in an image such as zero-crossings (edges) are more robust than those operating on lower-level percepts in an image such as raw image intensity values (Marr and Ullman, 1981). Our work demonstrates how, if the edges in an image are identified, it is possible to compute
motion, quickly and easily, by using pulses. We compute the velocity at each point
in the image. The estimation of the flow-field is of tremendous importance in
computations such as time-to-contact, figure-ground segregation and depth-from-motion. Our motion scheme is well-suited to typical indoor environments that tend
to have a lot of high-contrast edges. The much harder problem of computing motion
in low-contrast, high-noise outdoor environments still remains unsolved.
We present two motion algorithms. Our first algorithm is a "delay-and-correlate"
scheme operating on spatial edge features and is inspired by work on fly vision
(Hassenstein and Reichardt, 1956). It yields a non-monotonic response vs. velocity
curve. We present data from a chip that implements it. Our second algorithm is
a "facilitate-and-trigger" scheme operating on temporal edge features and yields a
monotonic response vs. velocity curve. Work is under way to implement our second
algorithm in analog VLSI.
2 The Delay-and-Correlate Scheme
Conceptually, there are two stages of computation. First, the zero-crossings in
the image are computed and then the motion of these zero-crossings is detected.
The zero-crossing circuitry has been described in (Bair and Koch, 1991). We
concentrate on describing the motion circuitry.
A schematized version of the chip is shown in Figure 1a. Only four photoreceptors in
the array are shown. The 1-D image from the array of photoreceptors is filtered with
a spatial bandpass filter whose kernel is composed of a difference of two exponentials
(implemented with resistive grids). The outputs of the bandpass filter feed into
edge detection circuitry that output a bit indicating the presence or absence of
an edge between two adjacent pixels. The edges on the chip are separated into
two polarities, namely, right-side-bright (R) and left-side-bright (L), which are kept
separate throughout the chip, including the motion circuitry. For comparison, in
biology, edges are often separated into light-on edges and light-off edges. The motion
circuits are sensitive only to the motion of those edges from which they receive
inputs. They detect the motion of a zero-crossing from one location to an adjacent
location using a Reichardt scheme as shown in Figure 1b. Each motion detecting unit receives two zero-crossing inputs ZC_n and ZC_{n+2}.¹ The ON-cells detect the
onset of zero-crossings (a rising voltage edge) by firing a pulse. The units marked
with D's delay these pulses by an amount D, controlled externally. The correlation
units marked with X's logically AND a delayed version of a pulse from one location
with an undelayed version of a pulse from the adjacent location. The output from
the left correlator is sensitive to motion from location n + 2 to location n, since the motion delay is compensated by the built-in circuit delay. The outputs of the two correlators are subtracted to yield the final motion signal. Figure 2a shows the circuit details. The boxes labelled with pulse symbols represent axon circuits. The

¹ ZC_{n+1} could have been used as well. ZC_{n+2} was chosen due to wiring constraints and because it increases the baseline distance for computing motion.
[Figure 1 graphic]

Figure 1-(A) The bandpass filtered photoreceptor signal is fed to the edge detectors marked with E's. The motion of these edges is detected by the motion detecting units marked with M's. (B) A single motion detecting unit, corresponding to an "M" unit in fig. A, has a Reichardt-like architecture.
axon circuits generate a single pulse of externally controlled width, P, in response
to a sharp positive transition in their input, but remain inactive in response to a
negative transition. In order to generate pulses that are delayed from the onset
of a zero-crossing, the output of one axon circuit, with pulse width parameter D,
is coupled via an inverter to the input of another axon circuit, with pulse width
parameter P. The multiplication operation is implemented by a simple logical AND.
The subtraction operation is implemented by a subtraction of two currents. An offchip sense amplifier converts the bidirectional current pulse outputs of the local
motion detectors into active-low or active-high voltage pulses. The axon circuit is
shown in Figure 2b. Further details of its operation may be found in (Sarpeshkar
et al., 1992).
Figure 3a shows how the velocity tuning curve is obtained. If the image velocity, v, is positive, and Δx is the distance between adjacent zero-crossing locations, then it can be shown that the output pulse width for the positive-velocity half of the motion detector, t_p, is

t_p = u \, \Theta(u),    (1)

where

u = P - \left| \frac{\Delta x}{v} - D \right|,    (2)

and Θ(·) is the unit step function. If v is negative, the same equations apply for the negative-velocity half of the motion detector except that the signs of Δx and t_p are reversed.
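To make the tuning behavior concrete, here is a small Python sketch of eqs. (1)-(2); the numerical values of Δx, P and D below are placeholders for illustration, not the chip's actual bias settings.

```python
import numpy as np

def delay_and_correlate_width(v, dx=180e-6, P=2e-3, D=5e-3):
    """Pulse width t_p = u * step(u), with u = P - |dx/v - D|  (eqs. 1-2).
    Units: meters and seconds; v may be a NumPy array of velocities."""
    u = P - np.abs(dx / v - D)
    return np.where(u > 0.0, u, 0.0)

# The response peaks where the motion delay dx/v equals the circuit delay D,
# i.e. at v = dx/D, and vanishes once |dx/v - D| >= P: a tuned,
# non-monotonic velocity response, as measured in Figure 3b.
```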
[Figure 2 graphic]
Figure 2-(A) The circuitry implementing the Reichardt scheme of Figure 1b is shown. The boxes labelled P and D represent axon circuits of pulse width parameter P and D, respectively. (B) The circuit details of an axon circuit that implements an ON-cell of Figure 1b are shown. The input and output are V_in and V_out respectively. The circuit was designed to mimic the behavior of sodium and leak conductances at the node of Ranvier in an axon fiber. The pulse width of the output pulse, the refractory period following its generation, and the threshold height of the input edge needed to trigger the pulse are determined by the values of bias voltages V_D, V_R and V_L respectively.
Experimental Data
Figure 4a shows the outputs of motion detectors between zero-crossings 3 and 5, 7 and 9, and 11 and 13, denoted as Mt3, Mt7, and Mt11, respectively. For an edge passing from left to right, the outputs Mt11, Mt7 and Mt3 are excited in this order, and they each report a positive velocity (active-high output that is above VREF). For an edge passing from right to left, the outputs Mt3, Mt7 and Mt11 are excited in this order and they each report a negative velocity (active-low output that is below VREF). Note that the amplitudes of these pulses contain no speed information and
only signal the direction of motion. Figure 4b shows that the output Mt3 is tuned to a particular velocity. As the rotational frequency of a cylinder with a painted edge is decreased from a velocity corresponding to a motor voltage of -6.1 V to a velocity corresponding to a motor voltage of -1.3 V, the output pulse width increases, then decreases again, as the optimal velocity is traversed through. A similar tuning curve is observed for positive motor voltages. If the distance from the surface of the spinning cylinder to the center of the lens is o, the distance from the center of the lens to the chip is i, the radius of the spinning cylinder is R, and its frequency
[Figure 3 graphic: (a) overlap of delayed and undelayed pulses over time; (b) output pulse width vs. velocity (deg/sec), over roughly -100 to 100.]
Figure 3-(a) The figure shows how the overlap between the delayed and undelayed pulses gives rise to velocity tuning for motion in the preferred direction. (b) The figure shows experimental data (circles) and a theoretical fit (line) for the motion unit Mt3's output response vs. angular velocity.
of rotation is f, then the velocity, v, of the moving edge as seen by the chip is given by

v = 2 \pi f R \, \frac{i}{o}.    (3)

The angular velocity, w, of the moving edge, is

w = \frac{2 \pi f R}{o} = \frac{v}{i}.    (4)

Figure 3b shows the output pulse width of Mt3 plotted against the angular velocity of the edge (w). The data are fit by a curve that computes t_p vs. w from (1)-(4), using measured values of Δx = 180 μm, o = 310 mm, i = 17 mm, and R = 58 mm.
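A short sketch of this geometry, using the measured values quoted in the text; note it returns angular velocity in rad/s, whereas Figure 3b is labeled in deg/sec, so a unit conversion would be needed to reproduce the plot.

```python
import math

def chip_velocity(f, R=58e-3, o=310e-3, i=17e-3):
    """Edge velocity as imaged on the chip, v = 2*pi*f*R*(i/o)  (eq. 3).
    f: cylinder rotation frequency in Hz; R, o, i in meters."""
    return 2.0 * math.pi * f * R * (i / o)

def edge_angular_velocity(f, R=58e-3, o=310e-3):
    """Angular velocity of the edge, w = 2*pi*f*R/o = v/i  (eq. 4), in rad/s."""
    return 2.0 * math.pi * f * R / o
```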
[Figure 4 graphic: (A) output traces of Mt11, Mt7, Mt3 over time (0.0-0.20 s); (B) Mt3 output pulses over time (0.0-4.0 × 10⁻² s) for motor voltages +5.0, +1.5, -1.3, -1.5, -6.1 V.]
Figure 4-(A) The chip output waveforms are active-high when the motion is in a direction such that motion detectors 11, 7 and 3 are stimulated in that order. In the opposite direction (3 - 7 - 11), the output waveforms are active-low. (B) The figure demonstrates the tuned velocity behavior of the motion unit Mt3 for various motor voltages (in V). Large positive voltages correspond to fast motion in one direction and large negative voltages correspond to fast motion in the opposite direction.
3 The Facilitate-and-Trigger Scheme
Although the delay-and-correlate scheme works well, the output yields ambiguous
motion information due to its non-monotonic velocity dependence, i.e., we don't
know if the output is small because the velocity is too slow or because it is too
fast. This problem may be solved by aggregating the outputs of a series of motion
detectors with overlapping tuning curves and progressively larger optimal velocities.
This solution is plausible in physiology but expensive in VLSI. We were thus motivated to create a new motion detector that possessed a monotonic velocity tuning
curve from the outset.
Figure 5a shows the architecture of such a motion detecting unit: the need for computing spatial edges is obviated by having a photoreceptor sensitive to temporal features caused by moving edges, i.e., sharp light onsets/offsets (Delbrück, 1993).
Each ON-cell fires a pulse in response to the onset of a temporal edge. The pulses
are fed to the facilitatory (F) and trigger (T) inputs of motion detectors tuned for
motion in the left or right directions. The facilitatory pulse defines a time window
of externally controlled width F, within which the output motion pulse may be
activated by the trigger pulse. Thus, the rising edge of the trigger pulse must occur
within the time window set by the facilitatory pulse in order to create a motion
output. If this condition is satisfied, the output motion pulse is triggered (begins)
at the start of the trigger pulse and ends at the end of the facilitatory pulse; its
width, thus, encodes the arrival time difference between the onset pulses at adjacent
locations due to the motion delay. Each half of the motion detector only responds
to motion in the direction that corresponds to F before T. Figure 5c shows that the
velocity tuning in this scheme is monotonic. It can be shown that the output pulse width for the positive half of the motion detector, t_p, is given by

t_p = u \, \Theta(u),    (5)

where u is given by

u = F - \frac{\Delta x}{v}.    (6)

The dead zone near the origin may be made as small as needed by increasing the width of the facilitatory pulse.
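For comparison with the delay-and-correlate sketch above, here is the analogous sketch of eqs. (5)-(6); again, the value of F is a placeholder, not the chip's setting.

```python
import numpy as np

def facilitate_and_trigger_width(v, dx=180e-6, F=5e-3):
    """Pulse width t_p = u * step(u), with u = F - dx/v  (eqs. 5-6)."""
    u = F - dx / v
    return np.where(u > 0.0, u, 0.0)

# Unlike the delay-and-correlate curve, this response is zero only in the
# dead zone v < dx/F, then increases monotonically and saturates toward F
# as v grows, so each output pulse width maps to a unique speed.
```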
Figure 5b shows a compact circuit that

[Figure 5 graphic: (A) facilitation and trigger pulse timing from photoreceptors PR_n and PR_{n+1}; (B) circuit schematic; (C) output pulse width vs. velocity.]
Figure 5-(A) The motion detecting unit uses a facilitate-and-trigger paradigm
instead of a delay-and-correlate paradigm as a basis for its operation. (B) A
compact circuit implements the motion detecting unit. VT is the trigger input,
VF is the facilitatory input, Vout is the output and VTH is a bias voltage. (C)
The output response has a monotonic dependence on velocity.
implements the facilitate-and-trigger scheme. The ON-cell, subtraction and sense-amplifier circuitry are implemented as in the Reichardt scheme.
The facilitate-and-trigger approach requires a total of 32 transistors per motion
unit compared with a total of 64 in the delay-and-correlate approach, needs one
parameter to be controlled (F) rather than two (D and P), and yields monotonic
tuning from the outset. We, therefore, believe that it will prove to be the superior
of the two approaches.
4 Conclusions
The evaluations of onsets, delays and coincidences, required for computing motion
are implemented very naturally with pulses rather than by graded potentials as in
all other motion chips, built so far. Both of our motion algorithms time the motion
of image features in an efficient fashion by using pulse computation.
Acknowledgements

Many thanks to Carver Mead for his encouragement, support and use of lab facilities. We acknowledge useful discussions with William Bialek, Nicola Franceschini and Tobias Delbrück. This work was supported by grants from the Office of Naval Research and the California Competitive Technologies Program. Chip fabrication was provided by MOSIS.
References

W. Bair and C. Koch, "An Analog VLSI Chip for Finding Edges from Zero-crossings," in Advances in Neural Information Processing Systems Vol. 3, R. Lippmann, J. Moody, D. Touretzky, eds., pp. 399-405, Morgan Kaufmann, San Mateo, CA, 1991.

T. Delbrück, "Investigations of Analog VLSI Visual Transduction and Motion Processing," Ph.D. thesis, Computation and Neural Systems Program, Caltech, Pasadena, CA, 1993.

B. Hassenstein and W. Reichardt, "Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus," Z. Naturforsch. 11b: pp. 513-524, 1956.

T. Horiuchi, W. Bair, A. Moore, B. Bishofberger, J. Lazzaro, C. Koch, "Computing Motion Using Analog VLSI Vision Chips: an Experimental Comparison among Different Approaches," Intl. Journal of Computer Vision, 8, pp. 203-216, 1992.

D. Marr and S. Ullman, "Directional Selectivity and its Use in Early Visual Processing," Proc. R. Soc. Lond. B 211, pp. 151-180, 1981.

R. Sarpeshkar, L. Watts, C. Mead, "Refractory Neuron Circuits," Internal Lab Memorandum, Physics of Computation Laboratory, Pasadena, CA, 1992.

J. Tanner and C. Mead, "An Integrated Optical Motion Sensor," VLSI Signal Processing II, S.-Y. Kung, R. E. Owen, and J. G. Nash, eds., 59-76, IEEE Press, NY, 1986.
Pruning Random Forests for Prediction on a Budget
Feng Nan
Systems Engineering
Boston University
[email protected]
Joseph Wang
Electrical Engineering
Boston University
[email protected]
Venkatesh Saligrama
Electrical Engineering
Boston University
[email protected]
Abstract
We propose to prune a random forest (RF) for resource-constrained prediction. We
first construct a RF and then prune it to optimize expected feature cost & accuracy.
We pose pruning RFs as a novel 0-1 integer program with linear constraints that
encourages feature re-use. We establish total unimodularity of the constraint set
to prove that the corresponding LP relaxation solves the original integer program.
We then exploit connections to combinatorial optimization and develop an efficient
primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up
approach, which benefits from good RF initialization, conventional methods are top-down, acquiring features based on their utility value, and are generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing
state-of-the-art resource-constrained algorithms.
1 Introduction
Many modern classification systems, including internet applications (such as web-search engines, recommendation systems, and spam filtering) and security & surveillance applications (such as wide-area surveillance and classification on large video corpora), face the challenge of prediction-time
budget constraints [21]. Prediction-time budgets can arise due to monetary costs associated with
acquiring information or computation time (or delay) involved in extracting features and running the
algorithm. We seek to learn, by training on fully annotated training datasets, a classifier that maintains high accuracy while meeting average resource constraints during prediction-time. We consider a system that adaptively acquires features as needed, depending on the instance (example), to achieve high classification accuracy with reduced feature acquisition cost.
We propose a two-stage algorithm. In the first stage, we train a random forest (RF) of trees using
an impurity function such as entropy or more specialized cost-adaptive impurity [16]. Our second
stage takes a RF as input and attempts to jointly prune each tree in the forest to meet global resource
constraints. During prediction-time, an example is routed through all the trees in the ensemble to the
corresponding leaf nodes and the final prediction is based on a majority vote. The total feature cost
for a test example is the sum of acquisition costs of unique features¹ acquired for the example in the entire ensemble of trees in the forest.²
We derive an efficient scheme to learn a globally optimal pruning of a RF minimizing the
empirical error and incurred average costs. We formulate the pruning problem as a 0-1 integer linear program that incorporates feature-reuse constraints. By establishing total unimodularity of the constraint set, we show that solving the linear program relaxation of the integer program yields the optimal solution to the integer program resulting in a polynomial
¹ When an example arrives at an internal node, the feature associated with the node is used to direct the example. If the feature has never been acquired for the example, an acquisition cost is incurred. Otherwise, no acquisition cost is incurred, as we assume that feature values are stored once computed.
² For time-sensitive cases such as web-search, we parallelize the implementation by creating parallel jobs across all features and trees. We can then terminate jobs based on what features are returned.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
time algorithm for optimal pruning. We develop a primal-dual algorithm by leveraging results from network-flow theory for scaling the linear program to large datasets. Empirically,
this pruning outperforms state-of-the-art resource efficient algorithms on benchmarked datasets.
              No Usage   1-7     > 7    Cost   Error
Unpruned RF     7.3%    91.7%    1%     42.0   6.6%
BudgetPrune    68.3%    31.5%    0.2%   24.3   6.7%

Table 1: Typical feature usage in a 40-tree RF before and after pruning (our algorithm) on the MiniBooNE dataset. Columns 2-4 list the percentage of test examples that do not use the feature, use it 1 to 7 times, and use it more than 7 times, respectively. Before pruning, 91% of examples use the feature only a few (1 to 7) times, paying a significant cost for its acquisition; after pruning, 68% of the examples no longer use this feature, reducing cost with minimal error increase. Column 5 is the average feature cost (the average number of unique features used by test examples). Column 6 is the test error of the RFs. Overall, pruning dramatically reduces average feature cost while maintaining the same error level.

Our approach is motivated by the following considerations: (i) RFs are scalable to large datasets and produce flexible decision boundaries yielding high prediction-time accuracy. The sequential feature usage of decision trees lends itself to adaptive feature acquisition. (ii) RF feature usage is superfluous, utilizing features with introduced randomness to increase diversity and generalization. Pruning can yield significant cost reduction with negligible performance loss by selectively pruning features sparsely used across trees, leading to cost reduction with minimal accuracy degradation (due to majority vote). See Table 1. (iii) Optimal pruning encourages examples to use
features either a large number of times, allowing for complex decision boundaries in the space of
those features, or not to use them at all, avoiding incurring the cost of acquisition. It enforces the
fact that once a feature is acquired for an example, repeated use incurs no additional acquisition cost.
Intuitively, features should be repeatedly used to increase discriminative ability without incurring
further cost. (iv) Resource constrained prediction has been conventionally viewed as a top-down
(tree-growing) approach, wherein new features are acquired based on their utility value. This is often
an intractable problem with combinatorial (feature subsets) and continuous components (classifiers)
requiring several relaxations and heuristics. In contrast, ours is a bottom-up approach that starts with
good initialization (RF) and prunes to realize optimal cost-accuracy tradeoff. Indeed, while we do not
pursue it, our approach can also be used in conjunction with existing approaches.
Related Work: Learning decision rules to minimize error subject to a budget constraint during
prediction-time is an area of recent interest, with many approaches proposed to solve the prediction-time budget constrained problem [9, 22, 19, 20, 12]. These approaches focus on learning complex adaptive decision functions and can be viewed as orthogonal to our work. Conceptually, these are top-down "growing" methods as we described earlier (see (iv)). Ours is a bottom-up approach that seeks to prune complex classifiers to trade off cost vs. accuracy.
Our work is based on RF classifiers [3]. Traditionally, feature cost is not incorporated when constructing RFs, however recent work has involved approximation of budget constraints to learn budgeted
RFs [16]. The tree-growing algorithm in [16] does not take feature re-use into account. Rather
than attempting to approximate the budget constraint during tree construction, our work focuses on
pruning ensembles of trees subject to a budget constraint. Methods such as traditional ensemble
learning and budgeted random forests can be viewed as complementary.
Decision tree pruning has been studied extensively to improve generalization performance, we are not
aware of any existing pruning method that takes into account the feature costs. A popular method for
pruning to reduce generalization error is Cost-Complexity Pruning (CCP), introduced by Breiman et
al. [4]. CCP trades-off classification ability for tree size, however it does not account for feature costs.
As pointed out by Li et al. [15], CCP has undesirable "jumps" in the sequence of pruned tree sizes.
To alleviate this, they proposed a Dynamic-Program-based Pruning (DPP) method for binary trees.
The DPP algorithm is able to obtain optimally pruned trees of all sizes; however, it faces the curse
of dimensionality when pruning an ensemble of decision trees and taking feature cost into account.
[23, 18] proposed to solve the pruning problem as a 0-1 integer program; again, their formulations
do not account for feature costs that we focus on in this paper. The coupling nature of feature usage
makes our problem much harder. In general pruning RFs is not a focus of attention as it is assumed
that overfitting can be avoided by constructing an ensemble of trees. While this is true, it often leads
to extremely large prediction-time costs. Kulkarni and Sinha [11] provide a survey of methods to
prune RFs in order to reduce ensemble size. However, these methods do not explicitly account for
feature costs.
2
2 Learning with Resource Constraints
In this paper, we consider solving the Lagrangian relaxed problem of learning under prediction-time
resource constraints, also known as the error-cost tradeoff problem:
\min_{f \in \mathcal{F}} \; \mathbb{E}_{(x,y) \sim P} \left[ \mathrm{err}(y, f(x)) \right] + \lambda \, \mathbb{E}_{x \sim P_x} \left[ C(f, x) \right],    (1)
where example/label pairs (x, y) are drawn from a distribution P; err(y, ŷ) is the error function; C(f, x) is the cost of evaluating the classifier f on example x; λ is a tradeoff parameter. A larger λ places a larger penalty on cost, pushing the classifier to have smaller cost. By adjusting λ we can
obtain a classifier satisfying the budget constraint. The family of classifiers F in our setting is the
space of RFs, and each RF f is composed of T decision trees T1 , . . . , TT .
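The role of λ can be made concrete with a small sketch; here `solve_for_lambda` is a hypothetical routine standing in for the pruning procedure of Section 3, and the monotonicity of cost in λ is an assumption of the sketch.

```python
def lagrangian_objective(err_rate, avg_cost, lam):
    """Empirical version of the error-cost tradeoff objective in eq. (1)."""
    return err_rate + lam * avg_cost

def smallest_feasible_lambda(solve_for_lambda, budget, lo=0.0, hi=1e3, tol=1e-4):
    """Bisection over the tradeoff parameter, assuming the average cost of the
    minimizer of eq. (1) is non-increasing in lambda. solve_for_lambda(lam)
    is a hypothetical solver returning (classifier, avg_cost)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        _, cost = solve_for_lambda(mid)
        if cost <= budget:
            hi = mid   # budget met; try a weaker cost penalty
        else:
            lo = mid   # over budget; penalize cost more heavily
    return hi
```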
Our approach: Rather than attempting to construct the optimal ensemble by solving Eqn. (1)
directly, we instead propose a two-step algorithm that first constructs an ensemble with low prediction
error, then prunes it by solving Eqn. (1) to produce a pruned ensemble given the input ensemble. By
adopting this two-step strategy, we obtain an ensemble with low expected cost while simultaneously
preserving the low prediction error.
There are many existing methods to construct RFs; however, the focus of this paper is on the second step, where we propose a novel approach to prune RFs to solve the tradeoff problem Eqn. (1). Our
pruning algorithm is capable of taking any RF as input, offering the flexibility to incorporate any
state-of-the-art RF algorithm.
3 Pruning with Costs
In this section, we treat the error-cost tradeoff problem Eqn. (1) as an RF pruning problem. Our key
contribution is to formulate pruning as a 0-1 integer program with totally unimodular constraints.
We first define notations used throughout the paper. A training sample S = {(x^{(i)}, y^{(i)}) : i = 1, . . . , N} is generated i.i.d. from an unknown distribution, where x^{(i)} ∈ ℝ^K is the feature vector with a cost assigned to each of the K features and y^{(i)} is the label for the ith example. In
the case of multi-class classification y ∈ {1, . . . , M}, where M is the number of classes. Given a decision tree T, we index the nodes as h ∈ {1, . . . , |T|}, where node 1 represents the root node. Let T̃ denote the set of leaf nodes of tree T. Finally, the corresponding definitions for T can be extended to an ensemble of T decision trees {T_t : t = 1, . . . , T} by adding a subscript t.
Pruning Parametrization: In order to model ensemble pruning as an optimization problem, we parametrize the space of all prunings of an ensemble. The process of pruning a decision tree T at an internal node h involves collapsing the subtree of T rooted at h, making h a leaf node. We say a pruned tree T^{(p)} is a valid pruned tree of T if (1) T^{(p)} is a subtree of T containing root node 1 and (2) for any h ≠ 1 contained in T^{(p)}, the sibling nodes (the set of nodes that share the same immediate parent node as h in T) must also be contained in T^{(p)}. Specifying a pruning is equivalent to specifying the nodes that are leaves in the pruned tree. We therefore introduce the following binary variable for each node h ∈ T:

z_h = 1 if node h is a leaf in the pruned tree, and z_h = 0 otherwise.

We call the set {z_h, ∀h ∈ T} the node variables, as they are associated with each node in the tree.
Consider any root-to-leaf path in a tree T; there should be exactly one node in the path that is a leaf node in the pruned tree. Let p(h) denote the set of predecessor nodes, the set of nodes (including h) that lie on the path from the root node to h. The set of valid pruned trees can then be represented as the set of node variables satisfying the following set of constraints: \sum_{u \in p(h)} z_u = 1, ∀h ∈ T̃. Given a valid pruning for a tree, we now seek to parameterize the error of the pruning.
Pruning error: As in most supervised empirical risk minimization problems, we aim to minimize the error on training data as a surrogate to minimizing the expected error. In a decision tree T, each node h is associated with a predicted label corresponding to the majority label among the training examples that fall into the node h. Let S_h denote the subset of examples in S routed to or through node h on T, and let Pred_h denote the predicted label at h. The number of misclassified examples at h is therefore e_h = \sum_{i \in S_h} 1[y^{(i)} \neq \mathrm{Pred}_h]. We can thus estimate the error of tree T in terms of the number of misclassified examples in the leaf nodes: \frac{1}{N} \sum_{h \in \tilde{T}} e_h, where N = |S| is the total number of examples.
Our goal is to minimize the expected test error of the trees in the random forest, which we empirically approximate, based on the aggregated probability distribution in Step (6) of Algorithm 1, with \frac{1}{TN} \sum_{t=1}^{T} \sum_{h \in \tilde{T}_t} e_h. We can express this error in terms of the node variables: \frac{1}{TN} \sum_{t=1}^{T} \sum_{h \in T_t} e_h z_h.
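A sketch of how these quantities could be tallied; the node bookkeeping (dictionaries keyed by node id) and names are our own illustrative choices.

```python
import numpy as np

def node_errors(S_h, y, pred):
    """e_h for every node h: the training examples routed to/through h whose
    label disagrees with the node's majority prediction Pred_h.
    S_h: dict node -> index array; y: label array; pred: dict node -> label."""
    return {h: int(np.sum(y[idx] != pred[h])) for h, idx in S_h.items()}

def pruned_error(e, z, N):
    """(1/(T*N)) * sum_t sum_h e_h * z_h for a candidate pruning z, where
    z[t][h] = 1 iff node h is a leaf of pruned tree t."""
    T = len(e)
    return sum(e[t][h] * z[t][h] for t in range(T) for h in e[t]) / (T * N)
```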
Pruning cost: Assume the acquisition costs for the K features, {c_k : k = 1, . . . , K}, are given. The feature acquisition cost incurred by an example is the sum of the acquisition costs of unique features acquired in the process of running the example through the forest. This cost structure arises due to the assumption that an acquired feature is cached and subsequent usage by the same example incurs no additional cost. Formally, the feature cost of classifying an example i on the ensemble T_{[T]} is given by C_{\mathrm{feature}}(T_{[T]}, x^{(i)}) = \sum_{k=1}^{K} c_k w_{k,i}, where the binary variables w_{k,i} serve as the indicators:

w_{k,i} = 1 if feature k is used by x^{(i)} in any T_t, t = 1, . . . , T, and w_{k,i} = 0 otherwise.

The expected feature cost of a test example can be approximated as \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} c_k w_{k,i}.
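The caching assumption behind the global usage variables can be stated in a few lines of code; the array layout below is our own convention.

```python
import numpy as np

def expected_feature_cost(w_per_tree, c):
    """w_per_tree: boolean (T, N, K) array, with w_per_tree[t, i, k] = 1 iff
    feature k is used by example i in tree t. Because feature values are
    cached, example i pays c_k once if it uses feature k in ANY tree."""
    w = w_per_tree.any(axis=0)                   # (N, K): global usage w_{k,i}
    return float((w.astype(float) @ c).mean())   # (1/N) sum_i sum_k c_k w_{k,i}
```

The `any` over trees is exactly what the linear constraints w_{k,i} ≥ w_{k,i}^{(t)} encode in the integer program below.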
In some scenarios, it is useful to account for computation cost along with feature acquisition cost during prediction-time. In an ensemble, this corresponds to the expected number of Boolean operations required to run a test example through the trees, which is equal to the expected depth of the trees. This can be modeled as \frac{1}{N} \sum_{t=1}^{T} \sum_{h \in T_t} |S_h| d_h z_h, where d_h is the depth of node h.
Putting it together: Having modeled the pruning constraints, prediction performance and costs, we formulate the problem of pruning using the relationship between the node variables z_h and the feature usage variables w_{k,i}. Given a tree T, feature k, and example x^{(i)}, let u_{k,i} be the first node associated with feature k on the root-to-leaf path the example follows in T. Feature k is used by x^{(i)} if and only if none of the nodes between the root and u_{k,i} is a leaf. We represent this by the constraint w_{k,i} + \sum_{h \in p(u_{k,i})} z_h = 1 for every feature k used by example x^{(i)} in T. Recall that w_{k,i} indicates whether or not feature k is used by example i, and p(u_{k,i}) denotes the set of predecessor nodes of u_{k,i}. Intuitively, this constraint says that either the tree is pruned along the path followed by example i before feature k is acquired, in which case z_h = 1 for some node h ∈ p(u_{k,i}) and w_{k,i} = 0; or w_{k,i} = 1, indicating that feature k is acquired for example i. We extend the notation to ensemble pruning with tree index t: z_h^{(t)} indicates whether node h in T_t is a leaf after pruning; w_{k,i}^{(t)} indicates whether feature k is used by the ith example in T_t; w_{k,i} indicates whether feature k is used by the ith example in any of the T trees T_1, . . . , T_T; u_{t,k,i} is the first node associated with feature k on the root-to-leaf path the example follows in T_t; K_{t,i} denotes the set of features the ith example uses on tree T_t. We arrive at the following integer program.
$$
\min_{z_h^{(t)},\, w_{k,i}^{(t)},\, w_{k,i} \,\in\, \{0,1\}} \;\;
\underbrace{\frac{1}{NT}\sum_{t=1}^{T}\sum_{h \in T_t} e_h^{(t)} z_h^{(t)}}_{\text{error}}
\;+\; \lambda \Bigg[ \underbrace{\frac{1}{N}\sum_{k=1}^{K} c_k \Big( \sum_{i=1}^{N} w_{k,i} \Big)}_{\text{feature acquisition cost}}
\;+\; \underbrace{\frac{1}{N}\sum_{t=1}^{T}\sum_{h \in T_t} |S_h|\, d_h\, z_h^{(t)}}_{\text{computational cost}} \Bigg]
\tag{IP}
$$

$$
\text{s.t.} \quad \sum_{u \in p(h)} z_u^{(t)} = 1, \quad \forall h \in \tilde{T}_t,\ \forall t \in [T], \qquad \text{(feasible prunings)}
$$
$$
w_{k,i}^{(t)} + \sum_{h \in p(u_{t,k,i})} z_h^{(t)} = 1, \quad \forall k \in K_{t,i},\ \forall i \in S,\ \forall t \in [T], \qquad \text{(feature usage / tree)}
$$
$$
w_{k,i} \ge w_{k,i}^{(t)}, \quad \forall k \in [K],\ \forall i \in S,\ \forall t \in [T]. \qquad \text{(global feature usage)}
$$
Totally Unimodular constraints: Even though integer programs are NP-hard to solve in general,
we show that (IP) can be solved exactly by solving its LP relaxation. We prove this in two steps:
first, we examine the special structure of the equality constraints; then we examine the inequality
constraint that couples the trees. Recall that a network matrix is one with each column having exactly
one element equal to 1, one element equal to -1 and the remaining elements being 0. A network
matrix defines a directed graph with the nodes in the rows and arcs in the columns. We have the
following lemma.
[Figure 1 graphic: a five-node decision tree (node 1 splits on feature 1 and node 3 on feature 2), shown alongside its equality-constraint matrix over the variables $z_1, \ldots, z_5, w_{1,1}^{(1)}, w_{2,1}^{(1)}$, and the equivalent network matrix obtained from it by row operations.]

Figure 1: A decision tree example with node numbers and associated features in subscripts, together with the constraint matrix and its equivalent network matrix form.
Lemma 3.1 The equality constraints in (IP) can be turned into an equivalent network matrix form
for each tree.
Proof. We observe that the first constraint $\sum_{u \in p(h)} z_u^{(t)} = 1$ requires the sum of the node variables along a path to be 1. The second constraint $w_{k,i}^{(t)} + \sum_{h \in p(u_{t,k,i})} z_h^{(t)} = 1$ has a similar sum, except for the $w_{k,i}^{(t)}$ variable. Imagine $w_{k,i}^{(t)}$ as yet another node variable for a fictitious child node of $u_{t,k,i}$, and the two equations are essentially equivalent. The rest of the proof follows directly from the construction in Proposition 3 of [18].
Figure 1 illustrates such a construction. The nodes are numbered 1 to 5. The subscripts at nodes 1 and 3 are the indices of the features used at those nodes. Since the equality constraints in (IP) separate across trees, for simplicity we consider only one tree, with one example routed to node 4. The equality constraints can be organized in the matrix form shown in the middle of Figure 1. Through row operations, the constraint matrix can be transformed into an equivalent network matrix. Such a transformation always works as long as the leaf nodes are arranged in a pre-order manner. Next, we deal with the inequality constraints and obtain our main result.
Theorem 3.2 The LP relaxation of (IP), where the 0-1 integer constraints are relaxed to interval constraints [0, 1] for all integer variables, has integral optimal solutions.

Due to space limits, the proof can be found in the Suppl. Material. The main idea is to show that the constraints remain totally unimodular even after adding the coupling constraints, so that the LP-relaxed polyhedron has only integral extreme points [17]. As a result, solving the LP relaxation yields the optimal solution to the integer program (IP), allowing for polynomial-time optimization. (This totally unimodular structure is due to our specific formulation; see the Suppl. Material for an alternative formulation that does not have such a property.)
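To illustrate Theorem 3.2, the sketch below (using scipy.optimize.linprog; the toy tree, error counts, and costs are hypothetical, and the 1/N scaling is folded into the coefficients) solves the LP relaxation of a single-tree instance of (IP) and recovers an integral pruning directly, with no rounding needed.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance mirroring Figure 1: variables [z1, z2, z3, z4, z5, w1, w2],
# where node 1 tests feature 1, node 3 tests feature 2, and a single example
# is routed along 1 -> 3 -> 4.
e = np.array([10.0, 2.0, 8.0, 1.0, 3.0])   # misclassified counts e_h per node
lam, c_feat = 1.0, np.array([1.0, 1.0])    # trade-off parameter and feature costs
obj = np.concatenate([e, lam * c_feat])

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0],  # root-to-leaf path 1-2
    [1, 0, 1, 1, 0, 0, 0],  # path 1-3-4
    [1, 0, 1, 0, 1, 0, 0],  # path 1-3-5
    [1, 0, 0, 0, 0, 1, 0],  # w1 + z1 = 1       (feature 1 first met at node 1)
    [1, 0, 1, 0, 0, 0, 1],  # w2 + z1 + z3 = 1  (feature 2 first met at node 3)
], dtype=float)
b_eq = np.ones(5)

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 7, method="highs")
print(res.x)  # integral vertex, e.g. [0, 1, 0, 1, 1, 1, 1]: keep the full tree
```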
Algorithm 1 BudgetPrune
During Training: input - ensemble (T_1, ..., T_T), training/validation data with labels, λ
1: initialize dual variables $\lambda_{k,i}^{(t)} \leftarrow 0$.
2: update $z_h^{(t)}, w_{k,i}^{(t)}$ for each tree t (shortest-path algorithm); set $w_{k,i} = 0$ if $\gamma_{k,i} > 0$ and $w_{k,i} = 1$ if $\gamma_{k,i} < 0$.
3: $\lambda_{k,i}^{(t)} \leftarrow [\lambda_{k,i}^{(t)} + \eta (w_{k,i}^{(t)} - w_{k,i})]_+$ for step size η, where $[\cdot]_+ = \max\{0, \cdot\}$.
4: go to Step 2 until the duality gap is small enough.
During Prediction: input - test example x
5: Run x through each tree to a leaf; obtain the probability distribution over label classes p_t at that leaf.
6: Aggregate $p = \frac{1}{T}\sum_{t=1}^{T} p_t$. Predict the class with the highest probability in p.
4 A Primal-Dual Algorithm
Even though we can solve (IP) via its LP relaxation, the resulting LP can be too large in practical applications for any general-purpose LP solver. In particular, the number of variables and constraints is roughly $O(T \cdot |T_{\max}| + N \cdot T \cdot K_{\max})$, where T is the number of trees; $|T_{\max}|$ is the maximum number of nodes in a tree; N is the number of examples; and $K_{\max}$ is the maximum number of features an example uses in a tree. The runtime of the LP thus scales as $O(T^3)$ with the number of trees in the ensemble, limiting the application to only small ensembles. In this section we propose a primal-dual approach that effectively decomposes the optimization into many sub-problems. Each sub-problem corresponds to a tree in the ensemble and can be solved efficiently as a shortest-path problem. The runtime per iteration is $O(\frac{T}{p}(|T_{\max}| + N \cdot K_{\max}) \log(|T_{\max}| + N \cdot K_{\max}))$, where p is the number of processors. We can thus massively parallelize the optimization and scale to much larger ensembles, as the runtime depends only linearly on T/p. To this end, we assign dual variables $\lambda_{k,i}^{(t)}$ for the inequality constraints $w_{k,i}^{(t)} \le w_{k,i}$ and derive the dual problem.
$$
\max_{\lambda_{k,i}^{(t)} \ge 0} \;\; \min_{\substack{z_h^{(t)} \in [0,1],\; w_{k,i}^{(t)} \in [0,1],\; w_{k,i} \in [0,1]}} \;\;
\frac{1}{NT}\sum_{t=1}^{T}\sum_{h \in T_t} \tilde{e}_h^{(t)} z_h^{(t)}
\;+\; \lambda \sum_{k=1}^{K} c_k \Big( \frac{1}{N}\sum_{i=1}^{N} w_{k,i} \Big)
\;+\; \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{k \in K_{t,i}} \lambda_{k,i}^{(t)} \big( w_{k,i}^{(t)} - w_{k,i} \big)
$$

$$
\text{s.t.} \quad \sum_{u \in p(h)} z_u^{(t)} = 1, \;\; \forall h \in \tilde{T}_t,\ \forall t \in [T];
\qquad
w_{k,i}^{(t)} + \sum_{h \in p(u_{t,k,i})} z_h^{(t)} = 1, \;\; \forall k \in K_{t,i},\ \forall i \in S,\ \forall t \in [T],
$$
where for simplicity we have combined the coefficients of $z_h^{(t)}$ in the objective of (IP) into $\tilde{e}_h^{(t)}$. The primal-dual algorithm is summarized in Algorithm 1. It alternates between updating the primal and the dual variables. The key is to observe that, given the dual variables, the primal problem (the inner minimization) decomposes over the trees in the ensemble and can be solved in parallel as shortest-path problems due to Lemma 3.1 (see also the Suppl. Material). The primal variables w_{k,i} can be solved in closed form: simply compute $\gamma_{k,i} = \lambda c_k / N - \sum_{t \in T_{k,i}} \lambda_{k,i}^{(t)}$, where T_{k,i} is the set of trees in which example i encounters feature k. Then w_{k,i} should be set to 0 if $\gamma_{k,i} > 0$ and to 1 if $\gamma_{k,i} < 0$.
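The closed-form w_{k,i} update and the projected dual ascent of Step 3 in Algorithm 1 can be written compactly. The numpy sketch below assumes the per-tree shortest-path subproblems (which produce the $w_{k,i}^{(t)}$) are solved elsewhere; the array shapes and step-size name are our own conventions.

```python
import numpy as np

def update_global_w(lam, c, lmbda_sum):
    """Closed-form w_{k,i}: gamma_{k,i} = lam * c_k / N - sum_{t in T_{k,i}} lambda^{(t)}_{k,i}.

    c         : (K,) feature costs.
    lmbda_sum : (K, N) sum of dual variables over the trees where example i meets
                feature k (entries for other trees are assumed to be zero).
    """
    K, N = lmbda_sum.shape
    gamma = lam * c[:, None] / N - lmbda_sum
    return (gamma < 0).astype(float)           # w = 1 where gamma < 0, else 0

def dual_ascent_step(lmbda, w_tree, w_global, eta):
    """Projected step: lambda <- [lambda + eta * (w^{(t)} - w)]_+ for all (t, k, i)."""
    return np.maximum(0.0, lmbda + eta * (w_tree - w_global))
```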
Note that our prediction rule aggregates the leaf distributions from all trees instead of just their
predicted labels. In the case where the leaves are pure (each leaf contains only one class of examples),
this prediction rule coincides with the majority vote rule commonly used in random forests. Whenever
the leaves contain mixed classes, this rule takes into account the prediction confidence of each tree, in contrast to majority voting. Empirically, this rule consistently gives lower prediction error than
majority voting with pruned trees.
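A minimal sketch of this prediction rule (numpy; `leaf_dists` is a hypothetical (T, num_classes) array holding the leaf class distributions reached by one test example):

```python
import numpy as np

def predict(leaf_dists):
    """Step (6): average the per-tree leaf class distributions, then take the arg-max."""
    p = leaf_dists.mean(axis=0)   # p = (1/T) * sum_t p_t
    return int(np.argmax(p))

# Hypothetical example: 3 trees, 2 classes.
print(predict(np.array([[0.6, 0.4], [0.2, 0.8], [0.9, 0.1]])))  # -> 0
```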
5 Experiments
We test our pruning algorithm BudgetPrune on four benchmark datasets used for prediction-time budget algorithms. The first two datasets have unknown feature acquisition costs, so we assign a cost of 1 to all features; the aim is to show that BudgetPrune successfully selects a sparse subset of features on average to classify each example with high accuracy. (In contrast to traditional sparse feature selection, our algorithm allows adaptivity, meaning different examples use different subsets of features.) The last two datasets have real feature acquisition costs measured in terms of CPU time; BudgetPrune achieves high prediction accuracy while spending much less CPU time on feature acquisition.

For each dataset we first train a RF and apply BudgetPrune to it using different λ's to obtain various points on the accuracy-cost tradeoff curve. We use in-bag data to estimate the error probability at each node, and the validation data for the feature cost variables w_{k,i}. We implement BudgetPrune using the CPLEX [1] network flow solver for the primal update step. The running time is significantly reduced (from hours down to minutes) compared to directly solving the LP relaxation of (IP) using standard solvers such as Gurobi [10]. Furthermore, the standard solvers simply break when trying to solve the larger experiments, whereas BudgetPrune handles them with ease. We run the experiments 10 times and report the means and standard deviations. Details of the datasets and parameter settings of competing methods are included in the Suppl. Material.
[Figure 2 panels: accuracy-cost tradeoff curves (test accuracy, or Average Precision@5 for panel (c), versus average feature cost) comparing BudgetPrune, CCP [Breiman et al. 1984], BudgetRF [Nan et al. 2015], GreedyPrune, and GreedyMiser [Xu et al. 2012] on (a) MiniBooNE, (b) Forest Covertype, (c) Yahoo! Rank, and (d) Scene15.]

Figure 2: Comparison of BudgetPrune against CCP, BudgetRF with early stopping, GreedyPrune and GreedyMiser on 4 real-world datasets. BudgetPrune (red) outperforms competing state-of-the-art methods. GreedyMiser dominates ASTC [12], CSTC [21] and DAG [20] significantly on all datasets. We omit them in the plots to clearly depict the differences between competing methods.

Competing methods: We compare against four other approaches. (i) BudgetRF [16]: the recursive node splitting process for each tree is stopped as soon as node impurity (entropy or Pairs) falls below a threshold. The threshold is a measure of the impurity tolerated in the leaf nodes. This can be considered a naive pruning method, as it reduces feature acquisition cost while maintaining low impurity in the leaves. (ii) Cost-Complexity Pruning (CCP) [4]: it iteratively prunes subtrees such that the resulting tree has low error and small size. We perform CCP on individual trees to different levels to obtain various points on the accuracy-cost tradeoff curve. CCP does not take feature costs into account. (iii) GreedyPrune: a greedy global feature pruning strategy that we propose; at each iteration it attempts to remove all nodes corresponding to one feature from the RF such that the resulting
pruned RF has the lowest training error and average feature cost. The process terminates in at most K
iterations, where K is the number of features. The idea is to reduce feature costs by successively
removing features that result in large cost reduction yet small accuracy loss. We also compare against
the state-of-the-art methods in budgeted learning. (iv) GreedyMiser [22]: a modification of gradient boosted regression trees [8] to incorporate feature cost. Specifically, each weak learner (a low-depth decision tree) is built to minimize the squared loss with respect to the current gradient at the training examples, plus the feature acquisition cost. To build each weak learner, the feature costs are set to zero for those features already used in previous weak learners. Other prediction-time budget algorithms, such as ASTC [12], CSTC [21] and cost-weighted l-1 classifiers, are shown to perform strictly worse than GreedyMiser by a significant amount [12, 16], so we omit them from our plots.
Since only the feature acquisition costs are standardized, for fair comparison we do not include the
computation cost term in the objective of (IP) and focus instead on feature acquisition costs.
MiniBooNE Particle Identification and Forest Covertype datasets [7]: Feature costs are uniform in both datasets. Our base RF consists of 40 trees using the entropy split criterion and choosing from the full set of features at each split. As shown in (a) and (b) of Figure 2, BudgetPrune (in red) achieves the best accuracy-cost tradeoff. The advantage of BudgetPrune is particularly large in (b). GreedyMiser has lower accuracy in the high-budget region compared to BudgetPrune in (a), and significantly lower accuracy in (b). The gap between BudgetPrune and the other pruning methods is small in (a) but much larger in (b). This indicates large gains from globally encouraging feature sharing in the case of (b) compared to (a). In both datasets, BudgetPrune successfully prunes away a large number of features while maintaining high accuracy. For example in (a), using only 18 unique features on average instead of 40, we can get essentially the same accuracy as the original RF.
Yahoo! Learning to Rank [6]: This ranking dataset consists of 473134 web documents and 19944 queries. Each example in the dataset contains features of a query-document pair together with the relevance rank of the document to the query. There are 141397/146769/184968 examples in the training/validation/test sets. There are 519 features for each example; each feature is associated with an acquisition cost in the set {1, 5, 20, 50, 100, 150, 200}, which represents the units of CPU time required to extract the feature and is provided by a Yahoo! employee. The labels are binarized so that the document is either relevant or not relevant to the query. The task is to learn a model that takes a new query and its associated set of documents and produces an accurate ranking using as little feature cost as possible. As in [16], we use Average Precision@5 as the performance metric, which gives a high reward for ranking the relevant documents on top. Our base RF consists of 140 trees using the cost-weighted entropy split criterion as in [16] and choosing from a random subset of 400 features at each split. As shown in (c) of Figure 2, BudgetPrune achieves ranking accuracy similar to GreedyMiser using only 30% of its cost.
Scene15 [13]: This scene recognition dataset contains 4485 images from 15 scene classes (labels).
Following [22] we divide it into 1500/300/2685 examples for training/validation/test sets. We use
a diverse set of visual descriptors and object detectors from the Object Bank [14]. We treat each
individual detector as an independent descriptor so we have a total of 184 visual descriptors. The
acquisition costs of these visual descriptors range from 0.0374 to 9.2820. For each descriptor we train
15 one-vs-rest kernel SVMs and use the output (margins) as features. Once any feature corresponding
to a visual descriptor is used for a test example, an acquisition cost of the visual descriptor is incurred
and subsequent usage of features from the same group is free for the test example. Our base RF
consists of 500 trees using entropy split criteria and choosing from a random subset of 20 features at
each split. As shown in (d) of Figure 2, BudgetPrune and GreedyPrune significantly outperform the other competing methods. BudgetPrune has the same accuracy at a cost of 9 as at the full cost of 32. BudgetPrune and GreedyPrune perform similarly, indicating that the greedy approach happens to solve the global optimization in this particular initial RF.
5.1 Discussion & Concluding Comments

We have empirically evaluated several resource-constrained learning algorithms, including BudgetPrune and its variations, on benchmarked datasets here and in the Suppl. Material. We highlight key features of our approach below. (i) State-of-the-art methods. Recent work has established that GreedyMiser and BudgetRF are among the state-of-the-art methods, dominating a number of other methods [12, 21, 20] on these benchmarked datasets. GreedyMiser requires building class-specific ensembles, tends to perform poorly, and is increasingly difficult to tune in multi-class settings. RF, by its nature, can handle multi-class settings efficiently. On the other hand, as we described earlier, [12, 20, 21] are fundamentally "tree-growing" approaches, namely top-down methods acquiring features sequentially based on a surrogate utility value. This is a fundamentally combinatorial problem that is known to be NP-hard [5, 21] and thus requires a number of relaxations and heuristics with no guarantees on performance. In contrast, our pruning strategy is initialized to realize good performance (RF initialization) and we are able to globally optimize the cost-accuracy objective. (ii) Variations on pruning. By explicitly modeling feature costs, BudgetPrune outperforms other pruning methods, such as early stopping of BudgetRF and CCP, that do not consider costs. GreedyPrune performs well, validating our intuition (see Table 1) that pruning sparsely occurring feature nodes utilized by a large fraction of examples can improve the test-time cost-accuracy tradeoff. Nevertheless, BudgetPrune outperforms GreedyPrune, which is indicative of the fact that, apart from obvious high-budget regimes, node pruning must account for how removal of one node may have an adverse impact on another downstream one. (iii) Sensitivity to impurity, feature costs, and other inputs. We explore these issues in the Suppl. Material. We experiment with BudgetPrune using different impurity functions, such as the entropy and Pairs [16] criteria. Pairs impurity tends to build RFs with lower cost but also lower accuracy compared to entropy, and so has poorer performance. We also explored how non-uniform costs can impact the cost-accuracy tradeoff. An elegant approach has been suggested by [2], who proposes an adversarial feature cost proportional to feature utility value. We find that BudgetPrune is robust to such costs. Other RF parameters, including the number of trees and the feature subset size at each split, do impact the cost-accuracy tradeoff in obvious ways, with more trees and a moderate feature subset size improving prediction accuracy while incurring higher cost.
Acknowledgment: We thank Dr Kilian Weinberger for helpful discussions and Dr David Castanon
for the insights on the primal dual algorithm. This material is based upon work supported in part by
NSF Grants CCF: 1320566, CNS: 1330008, CCF: 1527618, DHS 2013-ST-061-ED0001, ONR Grant
50202168 and US AF contract FA8650-14-C-1728.
References

[1] IBM ILOG CPLEX Optimizer. http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/, 2010.
[2] Djalel Benbouzid. Sequential prediction for budgeted learning: Application to trigger design. Theses, Université Paris-Sud - Paris XI, February 2014.
[3] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[4] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and regression trees. CRC Press, 1984.
[5] Venkatesan T. Chakaravarthy, Vinayaka Pandit, Sambuddha Roy, Pranjal Awasthi, and Mukesh K. Mohania. Decision trees for entity identification: Approximation algorithms and hardness results. ACM Trans. Algorithms, 7(2):15:1–15:22, March 2011.
[6] O. Chapelle, Y. Chang, and T. Liu, editors. Proceedings of the Yahoo! Learning to Rank Challenge, held at ICML 2010, Haifa, Israel, June 25, 2010, 2011.
[7] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[8] J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232, 2000.
[9] T. Gao and D. Koller. Active classification based on value of classifier. In Advances in Neural Information Processing Systems (NIPS), 2011.
[10] Gurobi Optimization Inc. Gurobi optimizer reference manual, 2015.
[11] V. Y. Kulkarni and P. K. Sinha. Pruning of random forest classifiers: A survey and future directions. In International Conference on Data Science Engineering (ICDSE), 2012.
[12] M. Kusner, W. Chen, Q. Zhou, E. Zhixiang, K. Weinberger, and Y. Chen. Feature-cost sensitive learning with submodular trees of classifiers. In AAAI, 2014.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In IEEE CVPR, 2006.
[14] L. J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object Bank: A high-level image representation for scene classification and semantic feature sparsification. In NIPS, 2010.
[15] X. Li, J. Sweigart, J. Teng, J. Donohue, and L. Thombs. A dynamic programming based pruning method for decision trees. INFORMS J. on Computing, 13(4):332–344, September 2001.
[16] F. Nan, J. Wang, and V. Saligrama. Feature-budgeted random forest. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015.
[17] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[18] H. D. Sherali, A. G. Hobeika, and C. Jeenanunta. An optimal constrained pruning strategy for decision trees. INFORMS Journal on Computing, 21(1):49–61, 2009.
[19] K. Trapeznikov and V. Saligrama. Supervised sequential classification under budget constraints. In International Conference on Artificial Intelligence and Statistics, pages 581–589, 2013.
[20] J. Wang, K. Trapeznikov, and V. Saligrama. Efficient learning by directed acyclic graph for resource constrained prediction. In Advances in Neural Information Processing Systems, 2015.
[21] Z. Xu, M. Kusner, M. Chen, and K. Q. Weinberger. Cost-sensitive tree of classifiers. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[22] Z. E. Xu, K. Q. Weinberger, and O. Chapelle. The greedy miser: Learning under test-time budgets. In Proceedings of the International Conference on Machine Learning, ICML, 2012.
[23] Yi Zhang and Huang Huei-chuen. Decision tree pruning via integer programming. Working paper, 2005.
5,804 | 6,251 |
Learning Sensor Multiplexing Design through Back-propagation
Ayan Chakrabarti
Toyota Technological Institute at Chicago
6045 S. Kenwood Ave., Chicago, IL
[email protected]
Abstract
Recent progress on many imaging and vision tasks has been driven by the use of
deep feed-forward neural networks, which are trained by propagating gradients of
a loss defined on the final output, back through the network up to the first layer that
operates directly on the image. We propose back-propagating one step further: to learn camera sensor designs jointly with networks that carry out inference on the images they capture. In this paper, we specifically consider the design and inference problems in a typical color camera, where the sensor is able to measure only one color channel at each pixel location, and computational inference is required to reconstruct a full color image. We learn the camera sensor's color multiplexing pattern by encoding it as a layer whose learnable weights determine which color
channel, from among a fixed set, will be measured at each location. These weights
are jointly trained with those of a reconstruction network that operates on the
corresponding sensor measurements to produce a full color image. Our network
achieves significant improvements in accuracy over the traditional Bayer pattern
used in most color cameras. It automatically learns to employ a sparse color
measurement approach similar to that of a recent design, and moreover, improves
upon that design by learning an optimal layout for these measurements.
1 Introduction
With the availability of cheap computing power, modern cameras can rely on computational postprocessing to extend their capabilities under the physical constraints of existing sensor technology.
Sophisticated techniques, such as those for denoising [3, 28], deblurring [19, 26], etc., are increasingly
being used to improve the quality of images and videos that were degraded during acquisition.
Moreover, researchers have posited novel sensing strategies that, when combined with post-processing
algorithms, are able to produce higher quality and more informative images and videos. For example,
coded exposure imaging [18] allows better inversion of motion blur, coded apertures [14, 23] allow
passive measurement of scene depth from a single shot, and compressive measurement strategies [1,
8, 25] combined with sparse reconstruction algorithms allow the recovery of visual measurements
with higher spatial, spectral, and temporal resolutions.
Key to the success of these latter approaches is the co-design of sensing strategies and inference algorithms, where the measurements are designed to provide information complementary to the known statistical structure of natural scenes. So far, sensor design in this regime has largely been either informed by expert intuition (e.g., [4]), or based on the decision to use a specific image model or inference strategy; e.g., measurements corresponding to random [1] or dictionary-specific [5] projections are a common choice for sparsity-based reconstruction methods. In this paper, we seek to
enable a broader data-driven exploration of the joint sensor and inference method space, by learning
both sensor design and the computational inference engine end-to-end.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: We propose a method to learn the optimal color multiplexing pattern for a camera through joint training with a neural network for reconstruction. (Top) Given C possible color filters that could be placed at each pixel, we parameterize the incident light as a C-channel image. This acts as input to a "sensor layer" that learns to select one of these channels at each pixel. A reconstruction network then processes these measurements to yield a full-color RGB image. We jointly train both for optimal reconstruction quality. (Bottom left) Since the hard selection of individual color channels is not differentiable, we encode these decisions using a soft-max layer, with a "temperature" parameter α that is increased across iterations. (Bottom right) We use a bifurcated architecture with two paths for the reconstruction network. One path produces K possible values for each color intensity through multiplicative and linear interpolation, and the other produces weights to combine these into a single estimate.
We leverage the successful use of back-propagation and stochastic gradient descent (SGD) [13] in
learning deep neural networks for various tasks [12, 16, 20, 24]. These networks process a given
input through a complex cascade of layers, and training is able to jointly optimize the parameters of
all layers to enable the network to succeed at the final inference task. Treating optical measurement
and computational inference as a cascade, we propose using the same approach to learn both jointly.
We encode the sensor's design choices into the learnable parameters of a "sensor layer" which, once trained, can be instantiated by camera optics. This layer's output is fed to a neural network that carries out inference computationally on the corresponding measurements. Both are then trained jointly.

We demonstrate this approach by applying it to the sensor-inference design problem in a standard digital color camera. Since image sensors can physically measure only one color channel at each pixel, cameras spatially multiplex the measurement of different colors across the sensor plane, and then computationally recover the missing intensities through a reconstruction process known as demosaicking. We jointly learn the spatial pattern for multiplexing different color channels (which requires making a hard decision to use one of a discrete set of color filters at each pixel) along with a neural network that performs demosaicking. Together, these enable the recovery of high-quality color images of natural scenes. We find that our approach significantly outperforms the traditional Bayer pattern [2] used in most color cameras. We also compare it to a recently introduced design [4] based on making sparse color measurements, which has superior noise performance and fewer aliasing artifacts. Interestingly, our network automatically learns to employ a similar measurement strategy, but is able to outperform this design by finding a more optimal spatial layout for the color measurements.
2 Background
Since both CMOS and CCD sensors can measure only the total intensity of visible light incident on
them, color is typically measured by placing an array of color filters (CFA) in front of the sensor
plane. The CFA pattern determines which color channel is measured at which pixel, with the most commonly used pattern in RGB color cameras being the Bayer mosaic [2], introduced in 1976. This is a 2 × 2 repeating pattern, with two measurements of the green channel and one each of red and blue. The color values that are not directly measured are then reconstructed computationally by demosaicking algorithms. These algorithms [15] typically rely on the assumption that different
color channels are correlated and piecewise smooth, and reason about locations of edges and other
high-frequency image content to avoid creating aliasing artifacts.
This approach yields reasonable results, and the Bayer pattern remains in widespread use even today.
However, the choice of the CFA pattern involves a trade-off. Color filters placed in front of the sensor
block part of the incident light energy, leading to longer exposure times or noisier measurements (in
comparison to grayscale cameras). Moreover, since every channel is regularly sub-sampled in the
Bayer pattern, reconstructions are prone to visually disturbing aliasing artifacts even with the best
reconstruction methods. Most consumer cameras address this by placing an anti-aliasing filter in
front of the sensor to blur the incident light field, but this leads to a loss of sharpness and resolution.
To address this, Chakrabarti et al. [4] recently proposed the use of an alternative CFA pattern in which a majority of the pixels measure the total unfiltered visible light intensity. Color is measured only sparsely, using 2 × 2 Bayer blocks placed at regularly spaced intervals on the otherwise unfiltered sensor plane. The resulting measured image corresponds to an un-aliased, full-resolution luminance image (i.e., the unfiltered measurements) with "holes" at the color sampling sites, with point-wise
color information on a coarser grid. The reconstruction algorithm in [4] is significantly different
from traditional demosaicking, and involves first recovering missing luminance values by hole-filling
(which is computationally easier than up-sampling since there is more context around the missing
intensities), and then propagating chromaticities from the color measurement sites to the remaining
pixels using edges in the luminance image as a guide. This approach was shown to significantly
improve upon the capabilities of a Bayer sensor, in terms of better noise performance, increased
sharpness, and reduced aliasing artifacts.
That [4]'s CFA pattern required a very different reconstruction algorithm illustrates the fact that both the sensor and the inference method need to be modified together to achieve gains in performance. In [4]'s case, this was achieved by applying an intuitive design principle: making high-SNR, non-aliased measurements of one color channel. However, these principles are tied to a specific reconstruction approach, and do not tell us, for example, whether regularly spaced 2 × 2 blocks are
the optimal way of measuring color sparsely.
While learning-based methods have been proposed for demosaicking [10, 17, 22] (as well as for joint
demosaicking and denoising [9, 11]), these work with a pre-determined CFA pattern and training is
used only to tune the reconstruction algorithm. In contrast, our approach seeks to learn, automatically
from data, both the CFA pattern and reconstruction method, so that they are jointly optimal in terms
of reconstruction quality.
3 Jointly Learning Measurement and Reconstruction
We formulate our task as that of reconstructing an RGB image y(n) ∈ R^3, where n ∈ Z^2 indexes pixel location, from a measured sensor image s(n) ∈ R. Along with this reconstruction task, we also have to choose a multiplexing pattern which determines the color channel that each s(n) corresponds to. We let this choice be between one of C channels, a parameterization that takes into account which spectral filters can be physically synthesized. We use x(n) ∈ R^C to denote the intensity measurements corresponding to each of these color channels, and a zero-one selection map I(n) ∈ {0, 1}^C, with |I(n)| = 1, to encode the multiplexing pattern, such that the corresponding sensor measurements are given by s(n) = I(n)^T x(n). Moreover, we assume that I(n) repeats periodically every P pixels, and therefore only has P^2 unique values.
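A minimal numpy sketch of this measurement model (image sizes and the random pattern are placeholders): the P × P one-hot selection map is tiled periodically over the sensor plane and picks one of the C channels at each pixel.

```python
import numpy as np

H, W, C, P = 64, 64, 4, 8
x = np.random.rand(H, W, C)                  # per-channel incident intensities x(n)

pattern = np.random.randint(0, C, size=(P, P))  # channel index at each pattern location
I = np.eye(C)[pattern]                       # (P, P, C) one-hot selection map I(n)
I_full = np.tile(I, (H // P, W // P, 1))     # periodic repetition over the sensor plane

s = (I_full * x).sum(axis=-1)                # s(n) = I(n)^T x(n), shape (H, W)
```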
Given a training set consisting of pairs of output images y(n) and C-channel input images x(n), our goal then is to learn this pattern I(n), jointly with a reconstruction algorithm that maps the corresponding measurements s(n) to the full color image output y(n). We use a neural network to map sensor measurements s(n) to an estimate ŷ(n) of the full color image. Furthermore, we encode the measurement process into a "sensor layer", which maps the input x(n) to measurements s(n), and whose learnable parameters encode the multiplexing pattern I(n). We then learn both the reconstruction network and the sensor layer simultaneously, with respect to a squared loss ‖ŷ(n) − y(n)‖² between the reconstructed and true color images.
3.1 Learning the Multiplexing Pattern
The key challenge to our joint learning problem lies in recovering the optimal multiplexing pattern
I(n), since it is ordinal-valued and requires learning to make a hard non-differentiable decision
between C possibilities. To address this, we rely on the standard soft-max operation, which is
traditionally used in multi-label classification tasks.
However, we are unable to use the soft-max operation directly: unlike in classification tasks, where
the ordinal labels are the final output, and where the training objective prefers hard assignment
to a single label, in our formulation I(n) is used to generate sensor measurements that are then
processed by a reconstruction network. Indeed, when using a straight soft-max, we find that the
reconstruction network converges to real-valued I(n) maps that correspond to measuring different
weighted combinations of the input channels. Thresholding the learned I(n) to be ordinal valued
leads to a significant drop in performance, even when we further train the reconstruction network to
work with this thresholded version.
Our solution to this is fairly simple. We use a soft-max with a temperature parameter that is increased slowly through training iterations. Specifically, we learn a vector w(n) ∈ R^C for each location n of the multiplexing pattern, with the corresponding I(n) given during training as:

I(n) = Soft-max[α_t w(n)],    (1)

where α_t is a scalar factor that we increase with iteration number t.

Therefore, in addition to changes due to the SGD updates to w(n), the effective distribution of I(n) becomes "peakier" at every iteration because of the increasing α_t, and as α_t → ∞, I(n) becomes a zero-one vector. Note that the gradient magnitudes of w(n) also scale up, since we compute these gradients at each iteration with respect to the current value of α_t. This ensures that the pattern can keep learning in the presence of a strong supervisory signal from the loss, while retaining a bias to drift towards making a hard choice for a single color channel.
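The following is a rough PyTorch re-implementation of such a sensor layer (our own sketch, not the authors' code; the schedule constant matches the value reported in Sec. 4, but the symbol names alpha and gamma are our own):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorLayer(nn.Module):
    """Selects one of C channels per pixel of a P x P pattern via an annealed soft-max."""
    def __init__(self, P=8, C=4):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(P, P, C))  # learnable logits w(n)

    def forward(self, x, alpha):
        # x: (B, H, W, C) with H, W multiples of P; alpha: soft-max temperature factor.
        I = F.softmax(alpha * self.w, dim=-1)        # soft selection map I(n)
        P = self.w.shape[0]
        H, W = x.shape[1], x.shape[2]
        I = I.repeat(H // P, W // P, 1)              # periodic tiling over the sensor
        return (I * x).sum(dim=-1)                   # s(n) = I(n)^T x(n)

def alpha_at(t, gamma=2.5e-5):
    """Quadratic annealing schedule alpha_t = 1 + (gamma * t)^2, as in Sec. 4."""
    return 1.0 + (gamma * t) ** 2
```

At test time, the soft map would be replaced by a hard one-hot selection via an arg-max over w(n), matching the procedure described next.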
As illustrated in Fig. 1, our sensor layer contains a parameter vector w(n) for each pixel of the P × P multiplexing pattern. During training, we generate the corresponding I(n) vectors using (1) above, and the layer then outputs sensor measurements based on the C-channel input x(n) as s(n) = I(n)^T x(n). Once training is complete (and for validation during training), we replace I(n) with its zero-one version, i.e., I(n)_c = 1 for c = arg max_c w_c(n), and 0 otherwise.
As we report in Sec. 4, our approach is able to successfully learn an optimal sensing pattern, which
adapts during training to match the evolving reconstruction network. We would also like to note here
two alternative strategies that we explored to learn an ordinal I(n), which were not as successful. We
considered using a standard soft-max approach with a separate entropy penalty on the distribution I(n); however, this caused the pattern I(n) to stop learning very early during training (or, for lower weightings of the penalty, had no effect at all). We also tried to incrementally pin the lowest I(n) values to zero after training for a number of iterations, in a manner similar to Han et al.'s [7] approach to network compression. However, even with significant tuning, this approach caused large parts of
the pattern search space to be eliminated early, and was not able to adapt to the fact that a channel
with a low weight at a particular location might eventually become desirable based on changes to the
pattern at other locations, and corresponding updates to the reconstruction network.
3.2 Reconstruction Network Architecture
Traditional demosaicking algorithms [15] produce a full color image by interpolating the missing
color values from neighboring measurement sites, and by exploiting cross-channel dependencies.
This interpolation is often linear, but in some cases takes the form of transferring chromaticities or
color ratios (e.g., in [4]). Moreover, most demosaicking algorithms reason about image textures and
edges to avoid smoothing across boundaries or creating aliasing artifacts.
We adopt a simple bifurcated network architecture that leverages these intuitions. As illustrated in Fig. 1, our network reconstructs each P × P patch in y(n) from a receptive field that is centered on that patch in the measured image s(n), and thrice as large in each dimension. The network has two paths, both of which operate on the entire input and both of which output (P × P × 3K) values, i.e., K values for each output color intensity. We denote these outputs as β(n, k), f(n, k) ∈ R^3.

One path produces f(n, k) by first computing multiplicative combinations of the entire 3P × 3P input patch (we instantiate this using a fully-connected layer, without a bias term, that operates in the log-domain), followed by linear combinations across each of the 3K values at each location. We interpret these f(n, k) values as K proposals for each y(n). The second path uses a more standard cascade of convolution layers (all of which have F outputs, with the first layer having a stride of P), followed by a fully connected layer that produces the outputs β(n, k) with the same dimensionality as f(n, k). We treat β(n, k) as gating values for the proposals f(n, k), and generate the final reconstructed patch as ŷ(n) = Σ_k β(n, k) f(n, k).
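A rough PyTorch sketch of this bifurcated design follows. Layer sizes track the description above, but several details are our assumptions: the epsilon in the log-domain, the soft-max normalization of the gates, a two-convolution stand-in for the convolutional cascade, and a single dense layer standing in for the per-location linear mixing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BifurcatedDemosaic(nn.Module):
    def __init__(self, P=8, K=24, F_out=128):
        super().__init__()
        D_in, D_out = (3 * P) ** 2, P * P * 3 * K
        # Path 1: multiplicative combinations via a bias-free linear map in log-space,
        # then a linear layer producing K proposals f(n, k) per output intensity.
        self.log_linear = nn.Linear(D_in, D_out, bias=False)
        self.mix = nn.Linear(D_out, D_out)
        # Path 2: conv stack (first conv has stride P) -> gating weights beta(n, k).
        self.conv1 = nn.Conv2d(1, F_out, kernel_size=P, stride=P)
        self.conv2 = nn.Conv2d(F_out, F_out, kernel_size=3, padding=1)
        self.gate = nn.Linear(F_out * 9, D_out)
        self.P, self.K = P, K

    def forward(self, s):                        # s: (B, 3P, 3P) measured patch
        B, P, K = s.shape[0], self.P, self.K
        flat = s.reshape(B, -1)
        f = self.mix(torch.exp(self.log_linear(torch.log(flat + 1e-6))))
        h = F.relu(self.conv1(s.unsqueeze(1)))   # (B, F_out, 3, 3)
        h = F.relu(self.conv2(h))
        beta = self.gate(h.reshape(B, -1))
        f = f.reshape(B, P, P, 3, K)
        beta = torch.softmax(beta.reshape(B, P, P, 3, K), dim=-1)
        return (beta * f).sum(-1)                # y_hat: (B, P, P, 3)
```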
4 Experiments
We follow a similar approach to [4] for training and evaluating our method. Like [4], we use
the Gehler-Shi database [6, 21] that consists of 568 color images of indoor and outdoor scenes,
captured under various illuminants. These images were obtained from RAW sensor images from a
camera employing the Bayer pattern with an anti-aliasing optical filter, by using the different color
measurements in each Bayer block to construct a single RGB pixel. These images are therefore at half
the resolution of the original sensor image, but have statistics that are representative of aliasing-free
full color images of typical natural scenes. Unlike [4], which only used 10 images for evaluation, we use the entire dataset: 56 images for testing, 461 images for training, and the remaining 51 images as a validation set to fix hyper-parameters.
We treat the images in the dataset as the ground truth for the output RGB images y(n). As sensor
measurements, we consider C = 4 possible color channels. The first three correspond to the original
sensor RGB channels. Like [4], we choose the fourth channel to be white or panchromatic, and
construct it as the sum of the RGB measurements. As mentioned in [4], this corresponds to a
conservative estimate of the light-efficiency of an unfiltered channel. We construct the C-channel
input image x(n) by including these measurements, followed by addition of different levels of
Gaussian noise, with high noise variances simulating low-light capture.
We learn a repeating pattern with P = 8. In our reconstruction network, we set the number of
proposals K for each output intensity to 24, and the number of convolutional layer outputs F in
the second path of our network to 128. When learning our sensor multiplexing pattern, we increase the scalar soft-max factor α_t in (1) according to a quadratic schedule, α_t = 1 + (γt)^2, with γ = 2.5 × 10^-5 in our experiments. We train a separate reconstruction network for each noise level
(positing that a camera could select between these based on the ISO settings). However, since it is
impractical to employ different sensors for different settings, we learn a single spatial multiplexing
pattern, optimized for reconstruction under moderate noise levels with standard deviation (STD) of
0.01 (with respect to intensity values in x(n) scaled to be between 0 and 1).
We train our sensor layer and reconstruction network jointly at this noise level on sets of 8 × 8 y(n) patches and corresponding 24 × 24 x(n) patches sampled randomly from the training set. We use a batch size of 128, with a learning rate of 0.001 for 1.5 million iterations. Then, keeping the sensor pattern fixed to our learned version, we train reconstruction networks from scratch for other noise levels, training again with a learning rate of 0.001 for 1.5 million iterations, followed by another 100,000 iterations with a rate of 10^-4. We also train reconstruction networks at all noise levels in a similar way for the Bayer pattern, as well as for the pattern of [4] (with a color sampling rate of 4).
Moreover, to allow consistent comparisons, we re-train the reconstruction network for our pattern at
the 0.01 noise level from scratch following this regime.
4.1 Evaluating the Reconstruction Network
We begin by comparing the performance of our learned reconstruction networks to traditional
demosaicking algorithms for the standard Bayer pattern, and the pattern of [4]. Note that our goal
is not to propose a new demosaicking method for existing sensors. Nevertheless, since our sensor
[Figure 2 panels: snapshots of the learned 8 × 8 sensor pattern at training iterations 2,500; 5,000; 7,500; 10,000; 12,500; 25,000; 100,000; 200,000; 300,000; 400,000; 500,000; 600,000; 1,000,000; 1,100,000; 1,200,000; 1,300,000; 1,400,000; and the final pattern at 1,500,000. The mean entropies of I(n) at these iterations are 1.38, 1.38, 1.38, 1.38, 1.38, 1.37, 1.02, 0.78, 0.75, 0.82, 0.86, 0.85, 0.57, 0.37, 0.35, 0.25, and 0.18, respectively.]
Figure 2: Evolution of the sensor pattern through training iterations. We find that our network's color sensing pattern changes qualitatively through the training process. In initial iterations, the sensor layer learns to sample color channels directly. As training continues, these color measurements are replaced by panchromatic (white) pixels. The final iterations see fine refinements to the pattern. We also report the mean (across pixels) entropy of the underlying distribution I(n) for each pattern. Note that, as expected, this entropy decreases across iterations as the distributions I(n) evolve from being soft selections of color channels to zero-one vectors that make hard ordinal decisions.
Table 1: Median Reconstruction PSNR (dB) using Traditional Demosaicking and the Proposed Network

            |            Bayer            |           CFZ [4]
            | Noise STD=0.0025 | STD=0.01 | Noise STD=0.0025 | STD=0.01
Traditional |      42.69       |  32.44   |      48.84       |  39.55
Network     |      47.55       |  43.72   |      49.08       |  44.64
pattern is being learned jointly with our proposed reconstruction architecture, it is important to
determine whether this architecture can learn to reason effectively with different kinds of sensor
patterns, which is necessary to effectively cover the joint sensor-inference design space.
We compare our learned networks to Zhang and Wu's method [27] for the Bayer pattern, and Chakrabarti et al.'s method [4] for their own pattern. We measure performance in terms of the reconstruction PSNR of all non-overlapping 64 × 64 patches from all test images (roughly 40,000 patches). Table 1 compares the median PSNR values across all patches for reconstructions using our network to those from traditional methods, at two noise levels: low noise corresponding to an STD of 0.0025, and moderate noise corresponding to 0.01. For the pattern of [4], we find that our network performs similarly to their reconstruction method at the low noise level, and significantly better at the higher noise level. On the Bayer pattern, our network achieves much better performance at both noise levels. We also note here that reconstruction using our network is significantly faster: it takes 9s on a six-core CPU, and 200ms when using a Titan X GPU, for a 2.7 megapixel image. In comparison, [4] and [27]'s reconstruction methods take 20s and 1 min., respectively, on the CPU.
4.2 Visualizing Sensor Pattern Training
In Fig. 2, we visualize the evolution of our sensor pattern during the training process, while it is being jointly learned with the reconstruction network. In the initial iterations, the sensor layer displays a preference for densely sampling the RGB channels, with very few panchromatic measurements; in fact, in the first row of Fig. 2, we see panchromatic pixels switching to color measurements. This is likely because early on in the training process, the reconstruction network hasn't yet learned to exploit cross-channel correlations, and therefore needs to measure the output channels directly.

Figure 3: Example reconstructions from (noisy) measurements with different sensor multiplexing patterns. Best viewed at higher resolution in the electronic version.
However, as training progresses, the reconstruction network gets more sophisticated, and we see the
number of color measurements get sparser and sparser, in favor of panchromatic pixels that offer the
advantage of higher SNR. Essentially, the sensor layer begins to adopt one of the design principles of
[4]. However, it distributes the color measurement sites across the pattern, instead of concentrating
them into separated blocks like [4]. In the last 500K iterations, we see that most changes correspond
to fine refinements of the pattern, with a few individual pixels swapping the channels they measure.
While the patterns themselves in Fig. 2 correspond to the channel at each pixel with the maximum
value in the selection map I(n), remember that these maps themselves are soft. Therefore, we also
report the mean entropy of the underlying I(n) for each pattern in Fig. 2. We see that this entropy
decreases across iterations, as the choice of color channel for more and more pixels becomes fixed,
with their distributions in I(n) becoming peakier and closer to being zero-one vectors.
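The entropy statistic reported here can be computed directly from the soft selection maps; the following is a minimal sketch, assuming the maps I(n) are available as an array of per-pixel channel distributions:

    import numpy as np

    def mean_selection_entropy(I, eps=1e-12):
        # I: (num_pixels, C) array; each row is a soft distribution over the
        # C channel choices at one pixel. The mean entropy approaches 0 as
        # the rows become one-hot (hard ordinal decisions).
        p = np.clip(I, eps, 1.0)
        return float(np.mean(-np.sum(p * np.log2(p), axis=1)))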
4.3 Evaluating Learned Pattern
Finally, we compare the performance of neural-network-based reconstruction from measurements
with our learned pattern to that from the Bayer pattern and the pattern of [4]. Table 2 shows
different quantiles of reconstruction PSNR for various noise levels, with noise STDs ranging from
0 to 0.04. Even though our sensor pattern was trained at the noise level of STD=0.01, we find it
achieves the highest reconstruction quality over a large range of noise levels. Specifically, it always
outperforms the Bayer pattern, by fairly significant margins at higher noise levels. The improvement
in performance over [4]'s pattern is less pronounced, although we do achieve consistently higher
PSNR values for all quantiles at most noise levels. Figure 3 shows examples of color patches
reconstructed from our learned sensor, and compares these to those from the Bayer pattern and [4].
We see that the reconstructions from the Bayer pattern are noticeably worse. This is because it
makes lower SNR measurements, and the reconstruction networks learn to smooth their outputs to
reduce this noise. Both [4] and our pattern yield significantly better reconstructions. Indeed, most of
our gains over the Bayer pattern come from choosing to make most measurements panchromatic, a
design principle shared by [4]. However, remember that our sensor layer learns this principle entirely
automatically from data, without expert supervision. Moreover, we see that [4]'s reconstructions
tend to have a few more instances of "chromaticity noise", in the form of contiguous regions with
incorrect hues, which explain its slightly lower PSNR values in Table 2.
Table 2: Network Reconstruction PSNR (dB) Quantiles for various CFA Patterns

Noise STD   Percentile   Bayer [2]   CFZ [4]   Learned
0           25%          47.62       48.04     47.97
            50%          51.72       52.17     52.12
            75%          54.97       55.32     55.30
0.0025      25%          44.61       46.05     46.08
            50%          47.55       49.08     49.17
            75%          50.52       51.57     51.76
0.0050      25%          42.55       44.33     44.37
            50%          45.63       47.01     47.19
            75%          48.73       49.68     49.94
0.0075      25%          41.34       42.92     43.08
            50%          44.48       45.60     45.85
            75%          47.77       48.41     48.69
0.0100      25%          40.58       41.97     42.16
            50%          43.72       44.64     44.94
            75%          47.10       47.56     47.80
0.0125      25%          40.29       41.17     41.41
            50%          43.36       43.88     44.22
            75%          46.65       47.04     47.27
0.0150      25%          39.97       40.54     40.85
            50%          43.03       43.29     43.69
            75%          46.25       46.69     46.86
0.0175      25%          39.60       40.03     40.31
            50%          42.62       42.83     43.12
            75%          45.82       46.39     46.45
0.0200      25%          39.31       39.49     39.96
            50%          42.39       42.39     42.78
            75%          45.56       46.14     46.23
0.0300      25%          38.18       38.31     38.92
            50%          41.17       41.48     41.85
            75%          44.23       45.61     45.63
0.0400      25%          37.14       37.43     38.00
            50%          39.98       40.86     41.02
            75%          43.17       45.11     44.98

5 Conclusion
In this paper, we proposed learning sensor design jointly with a neural network that carried out
inference on the sensor's measurements, specifically focusing on the problem of finding the optimal
color multiplexing pattern for a digital color camera. We learned this pattern by joint training with
a neural network for reconstructing full color images from the multiplexed measurements. We
used a soft-max operation with an increasing temperature parameter to model the non-differentiable
color channel selection at each point, which allowed us to train the pattern effectively. Finally,
we demonstrated that our learned pattern enabled better reconstructions than past designs. An
implementation of our method, along with trained models, data, and results, is available at our project
page at http://www.ttic.edu/chakrabarti/learncfa/.
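The soft-max-with-temperature device mentioned in the paragraph above can be sketched as follows; the parameterization by per-pixel logits w, and the NumPy forward pass, are illustrative assumptions rather than the released implementation:

    import numpy as np

    def soft_channel_selection(w, temperature):
        # w: (num_pixels, C) learnable per-channel scores. Increasing the
        # temperature over training sharpens the soft-max toward a one-hot
        # channel choice, keeping the selection differentiable early on.
        z = temperature * w
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)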
Our results suggest that learning measurement strategies jointly with computational inference is both
useful and possible. In particular, our approach can be used directly to learn other forms of optimized
multiplexing patterns, e.g., spatio-temporal multiplexing for video, viewpoint multiplexing in light-field cameras, etc. Moreover, these patterns can be learned to be optimal for inference tasks beyond
reconstruction. For example, a sensor layer jointly trained with a neural network for classification
could be used to discover optimal measurement strategies for say, distinguishing between biological
samples using multi-spectral imaging, or detecting targets in remote sensing.
Acknowledgments
We thank NVIDIA corporation for the donation of a Titan X GPU used in this research.
References
[1] R. G. Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 2007.
[2] B. E. Bayer. Color imaging array. US Patent 3971065, 1976.
[3] H. C. Burger, C. J. Schuler, and S. Harmeling. Image denoising: Can plain neural networks
compete with BM3D? In Proc. CVPR, 2012.
[4] A. Chakrabarti, W. T. Freeman, and T. Zickler. Rethinking color cameras. In Proc. ICCP, 2014.
[5] M. Elad. Optimized projections for compressed sensing. IEEE Trans. Sig. Proc., 2007.
[6] P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp. Bayesian color constancy revisited.
In Proc. CVPR, 2008.
[7] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with
pruning, trained quantization and huffman coding. arXiv:1510.00149, 2015.
[8] J. Holloway, A. C. Sankaranarayanan, A. Veeraraghavan, and S. Tambe. Flutter shutter video
camera for compressive sensing of videos. In Proc. ICCP, 2012.
[9] T. Klatzer, K. Hammernik, P. Knobelreiter, and T. Pock. Learning joint demosaicing and
denoising based on sequential energy minimization. In Proc. ICCP, 2016.
[10] O. Kapah and H. Z. Hel-Or. Demosaicking using artificial neural networks. In Electronic
Imaging, 2000.
[11] D. Khashabi, S. Nowozin, J. Jancsary, and A. W. Fitzgibbon. Joint demosaicing and denoising
via learned nonparametric random fields. IEEE Trans. Imag. Proc., 2014.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[13] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In Neural Networks: Tricks of
the trade. Springer, 1998.
[14] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional
camera with a coded aperture. In ACM Transactions on Graphics (TOG), 2007.
[15] X. Li, B. Gunturk, and L. Zhang. Image demosaicing: A systematic survey. In Proc. SPIE,
2008.
[16] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation.
In Proc. CVPR, 2015.
[17] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image
restoration. In Proc. ICCV, 2009.
[18] R. Raskar, A. Agrawal, and J. Tumblin. Coded exposure photography: motion deblurring using
fluttered shutter. ACM Transactions on Graphics (TOG), 2006.
[19] C. J. Schuler, H. C. Burger, S. Harmeling, and B. Scholkopf. A machine learning approach for
non-blind image deconvolution. In Proc. CVPR, 2013.
[20] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated
recognition, localization and detection using convolutional networks. arXiv:1312.6229, 2013.
[21] L. Shi and B. Funt. Re-processed version of the Gehler color constancy dataset of 568 images.
2010. Accessed from http://www.cs.sfu.ca/~colour/data/.
[22] J. Sun and M. F. Tappen. Separable markov random field model and its applications in low level
vision. IEEE Trans. Imag. Proc., 2013.
[23] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. Dappled photography:
Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. 2007.
[24] X. Wang, D. Fouhey, and A. Gupta. Designing deep networks for surface normal estimation. In
Proc. CVPR, 2015.
[25] A. E. Waters, A. C. Sankaranarayanan, and R. Baraniuk. Sparcs: Recovering low-rank and
sparse matrices from compressive measurements. In NIPS, 2011.
[26] L. Xu, J. S. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution.
In NIPS, 2014.
[27] L. Zhang and X. Wu. Color demosaicking via directional linear minimum mean square-error
estimation. IEEE Trans. Imag. Proc., 2005.
[28] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image
restoration. In Proc. ICCV, 2011.
Optimal Sparse Linear Encoders and Sparse PCA
Malik Magdon-Ismail
Rensselaer Polytechnic Institute, Troy, NY 12211
[email protected]
Christos Boutsidis
New York, NY
[email protected]
Abstract
Principal components analysis (PCA) is the optimal linear encoder of data. Sparse
linear encoders (e.g., sparse PCA) produce more interpretable features that can
promote better generalization. (i) Given a level of sparsity, what is the best approximation to PCA? (ii) Are there efficient algorithms which can achieve this
optimal combinatorial tradeoff? We answer both questions by providing the first
polynomial-time algorithms to construct optimal sparse linear auto-encoders; additionally, we demonstrate the performance of our algorithms on real data.
1 Introduction
The data matrix is X ∈ ℝ^{n×d} (a row x_iᵀ ∈ ℝ^{1×d} is a data point in d dimensions). Auto-encoders
transform (encode) the data into a low dimensional feature space and then lift (decode) it back to
the original space, reconstructing the data through a bottleneck. If the reconstruction is close to the
original, then the encoder preserved most of the information using just a small number of features.
Auto-encoders are important in machine learning because they perform information preserving dimension reduction. Our focus is the linear auto-encoder, which, for k < d, is a pair of linear
mappings h : ℝᵈ → ℝᵏ and g : ℝᵏ → ℝᵈ, specified by an encoder matrix H ∈ ℝ^{d×k} and a decoder
matrix G ∈ ℝ^{k×d}. For a data point x ∈ ℝᵈ, the encoded feature is z = h(x) = Hᵀx ∈ ℝᵏ and the
reconstructed datum is x̂ = g(z) = Gᵀz ∈ ℝᵈ. The reconstructed data matrix is X̂ = XHG. The
pair (H, G) is a good auto-encoder if X̂ ≈ X under some loss metric (we use squared loss):
Definition 1 (Loss ℓ(H, X)). The loss of encoder H is the minimum possible Frobenius reconstruction error (over all linear decoders G) when using H as encoder for X:
ℓ(H, X) = min_{G ∈ ℝ^{k×d}} ‖X − XHG‖²_F = ‖X − XH(XH)⁺X‖²_F.
The loss is defined for an encoder H alone, by choosing the decoder optimally. The literature
considers primarily the symmetric auto-encoder, which places the additional restriction that G = H⁺
[18]. To get the most useful features, one should not place unnecessary constraints on the decoder.
Principal Component Analysis (PCA) is the most famous linear auto-encoder, because it is optimal
(and symmetric). Since rank(XHG) ≤ k, the loss is bounded below by ℓ(H, X) ≥ ‖X − X_k‖²_F (X_k is
the best rank-k approximation to X). By the Eckart-Young theorem, X_k = XV_kV_kᵀ, where V_k ∈
ℝ^{d×k} is the matrix whose columns are the top-k right singular vectors of X (see, for example, Chapter 9 of [14]). Thus, the optimal linear encoder is H_opt = V_k, and the top-k PCA-features are
Z_pca = XV_k. Since its early beginnings [19], PCA has evolved into a classic tool for data analysis.
those components may be hard to interpret. In many applications, it is desirable to ?explain? the
features using a few original variables (for example, genes in a biological application or assets in
a financial application). One trades off the fidelity of the features (their ability to reconstruct the
data) with the interpretability of the features using a few original features (sparsity of the encoder
H). We introduce a sparsity parameter r and require that every column of H have at most r non-zero
elements. Every feature in an r-sparse encoding can be "explained" using at most r original features.
We now formally state the sparse linear encoder problem:
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Problem 1: Optimal r-sparse encoder (Sparse PCA)
Given X ∈ ℝ^{n×d}, ε > 0 and k < rank(X), find an r-sparse encoder H with minimum r,
for which the loss is a (1 + ε) relative approximation to the optimal loss:
ℓ(H, X) = ‖X − XH(XH)⁺X‖²_F ≤ (1 + ε)‖X − X_k‖²_F.
Our Contributions. First, we are proposing the "sparse-PCA" problem defined above in lieu of
traditional sparse-PCA, which is based on maximizing variance, not minimizing loss. With no sparsity constraint, variance maximization and loss minimization are equivalent, both being solved by
PCA. Historically, variance maximization became the norm for sparse-PCA. However, minimizing
loss better achieves the machine learning goal of preserving information. The table below compares
the 10-fold cross-validation error E_out for an SVM classifier using features from popular variance
maximizing sparse-PCA encoders and our loss minimizing sparse-encoder (k = 6 and r = 7):
                          SVM     T-Power [23]           G-Power-ℓ0 [10]        G-Power-ℓ1 [10]        Our Algorithm
              d           E_out   E_out  Loss   Var      E_out  Loss   Var      E_out  Loss   Var      E_out  Loss   Var
ARCENE        10^4        0.44    (matrix too large)     0.325  2.5    0.01     0.35   2.5    0.01     0.29   1.4    0.005
Gisette       5000        0.44    0.45   1.17   0.1      0.49   1.2    0.02     0.50   1.2    0.02     0.31   1.1    0.02
Madelon       500         0.51    0.46   1.3    0.09     0.33   1.08   0.08     0.46   1.33   0.05     0.24   1.07   0.03
'1' vs '5'    256         0.35    0.30   2.4    0.21     0.34   2.28   0.27     0.33   2.3    0.19     0.01   1.2    0.03
SECOM         691         0.07    0.34   1.3    0.96     0.35   2.9    0.79     0.33   2.9    0.79     0.31   1.0    0.46
Spam          57          0.17    0.20   1.00   1.0      0.22   1.03   1.0      0.20   1.02   1.0      0.21   1.02   1.0
Lower loss gives lower error. Our experiments are not exhaustive, but their role is modest: to motivate minimizing loss as the right machine learning objective for sparse encoders (Problem 1).
Our main contribution is polynomial sparse encoder algorithms with theoretical guarantees that
solve Problem 1. We give a (1 + ε)-optimal encoder with sparsity O(k/ε) (Theorem 7). This sparsity
cannot be beaten by any algorithm that guarantees a (1 + ε)-approximation (Theorem 8). Ours is the
first theoretical guarantee for a k-component sparse linear encoder with respect to the optimal PCA.
Our algorithm constructs sparse PCA features (columns of the encoder H) which preserve almost
as much information as optimal PCA-features. Our second technical contribution (Theorem 11)
is an algorithm to construct sparse features iteratively (typical of sparse-PCA algorithms). Iterative
algorithms are notoriously hard to analyze, and we give the first theoretical guarantees for an iterative
sparse encoder. (Detailed proofs are postponed to a full version.)
Notation. Let ρ ≤ min{n, d} be rank(X) (typically ρ = d). We use A, B, C, ... for matrices and
a, b, c, ... for vectors. The Euclidean basis is e_1, e_2, ... (dimension can be inferred from context). For
an n × d matrix X, the singular value decomposition (SVD) gives X = UΣVᵀ, where the columns
of U ∈ ℝ^{n×ρ} are the left singular vectors, the columns of V ∈ ℝ^{d×ρ} are the right singular vectors,
and Σ ∈ ℝ^{ρ×ρ} is the positive diagonal matrix of singular values σ_1 ≥ ··· ≥ σ_ρ; U and V are
orthonormal, UᵀU = VᵀV = I_ρ. For integer k, we use U_k ∈ ℝ^{n×k} (resp. V_k ∈ ℝ^{d×k}) for the
first k left (resp. right) singular vectors, and Σ_k ∈ ℝ^{k×k} is the diagonal matrix with the top-k
singular values. We can view a matrix as a row of columns. So, X = [f_1, ..., f_d], U = [u_1, ..., u_ρ],
V = [v_1, ..., v_ρ], U_k = [u_1, ..., u_k] and V_k = [v_1, ..., v_k]. We use f for the columns of X, the
features, and we reserve x_i for the data points (rows of X), Xᵀ = [x_1, ..., x_n]. A = [a_1, ..., a_k]
is (r_1, ..., r_k)-sparse if ‖a_i‖₀ ≤ r_i; if all r_i are equal to r, we say the matrix is r-sparse. The
Frobenius (Euclidean) norm of a matrix A is ‖A‖²_F = Σ_{ij} A²_{ij} = Tr(AᵀA) = Tr(AAᵀ). The
pseudo-inverse A⁺ of A with SVD U_A Σ_A V_Aᵀ is A⁺ = V_A Σ_A⁻¹ U_Aᵀ; AA⁺ = U_A U_Aᵀ is a symmetric
projection operator. For matrices A, B with AᵀB = 0, a generalized Pythagoras theorem holds,
‖A + B‖²_F = ‖A‖²_F + ‖B‖²_F. ‖A‖₂ is the operator norm (top singular value) of A.
Discussion of Related Work. PCA is the optimal (and most popular) linear auto-encoder. Nonlinear auto-encoders became prominent with auto-associative neural networks [7, 3, 4, 17, 18]. There
is some work on sparse linear auto-encoders (e.g. [15]) and a lot of research on ?sparse PCA?. The
importance of sparse factors in dimensionality reduction has been recognized in some early work:
the varimax criterion [11] has been used to rotate the factors to encourage sparsity, and this has
been used in multi-dimensional scaling approaches to dimension reduction [20, 12]. One of the first
attempts at sparse PCA used axis rotations and component thresholding [6]. The traditional formulation of sparse PCA is as cardinality constrained variance maximization: maxv v T Av subject to
v T v = 1 and kvk0 ? r, which is NP-hard [14]. The exhaustive algorithm requires O dr2 ( dr )
2
computation which can be improved to O(dq+1 ) for a rank-q perturbation of the identity [2]. These
algorithms are not practical. Several heuristics exist. [22] and [24] take an L1 penalization view.
DSPCA (direct sparse PCA) [9] also uses an L1 sparsifier but solves a relaxed convex semidefinite program which is further refined in [8] where they also give a tractable sufficient condition for
testing optimality. The simplest algorithms use greedy forward and backward subset selection. For
example, [16] develop a greedy branch and bound algorithm based on spectral bounds with O(d3 )
running time for forward selection and O(d4 ) running time for backward selection. An alternative
view of the problem is as a sparse matrix reconstruction problem; for example [21] obtain sparse
principal components using regularized low-rank matrix approximation. Most existing SPCA algorithms find one sparse principal component. One applies the algorithm iteratively on the residual
after projection to get additional sparse principal components [13].
There are no polynomial algorithms with optimality guarantees. [1] considers sparse PCA with
a non-negativity constraint: they give an algorithm with input parameter k and running time
O(dk+1 log d + dk r3 ) which constructs a sparse component within (1 ? nr kX ? Xk k2 /kXk2 ) from
optimal. The running time is not practical when k is large; further, the approximation guarantee
relies on rapid spectral decay of X and only applies to the first component, not to further iterates.
Explained Variance vs. Loss. For symmetric auto-encoders, minimizing loss is equivalent to
maximizing the symmetric explained variance ‖XHH⁺‖²_F due to the identity
var(X) = ‖X‖²_F = ‖X(I − HH⁺) + XHH⁺‖²_F = ‖X − XHH⁺‖²_F + ‖XHH⁺‖²_F
(the last equality is from Pythagoras' theorem). The PCA auto-encoder is symmetric, V_k⁺ = V_kᵀ.
So the optimal encoders for maximum variance and for minimum loss are the same: PCA. But, when it
comes to approximation, an approximation algorithm for loss can be converted to an approximation
algorithm for variance maximization (the reverse is not true).
Theorem 2. If ‖X − XHH⁺‖²_F ≤ (1 + ε)‖X − X_k‖²_F, then ‖XHH⁺‖²_F ≥ (1 − ε(ρ − k)/k) ‖X_k‖²_F.
When factors are not decorrelated, explained variance is not well defined [24], whereas loss is well
defined for any encoder. Minimizing loss and maximizing the explained variance are both ways of
encouraging H to be close to V_k. However, when H is constrained (for example to be sparse), these
optimization objectives can produce very different solutions. From the machine learning perspective,
symmetry is an unnecessary constraint on the auto-encoder. All we want is an encoder that produces
a compact representation of the data while capturing as much information as possible.
2 Optimal Sparse Linear Encoder
We show a black-box reduction of sparse encoding to the column subset selection problem (CSSP).
We then use column subset selection algorithms to construct provably accurate sparse auto-encoders.
For X = [f_1, ..., f_d], we let C = [f_{i_1}, f_{i_2}, ..., f_{i_r}] denote a matrix formed using r columns "sampled"
from X, where 1 ≤ i_1 < i_2 < ··· < i_r ≤ d are distinct column indices. We can use a matrix Ω ∈ ℝ^{d×r}
to perform the sampling, C = XΩ, where Ω = [e_{i_1}, e_{i_2}, ..., e_{i_r}] and the e_i are the standard basis vectors
in ℝᵈ (post-multiplying X by e_i "samples" the ith column of X). The columns of C span a subspace
in the range of X. A sampling matrix can be used to construct an r-sparse matrix.
Lemma 3. Let Ω = [e_{i_1}, e_{i_2}, ..., e_{i_r}] and let A ∈ ℝ^{r×k} be any matrix. Then ΩA is r-sparse.
Define X_C = CC⁺X, the projection of X onto the column space of C. Let X_{C,k} ∈ ℝ^{n×d} be the
optimal rank-k approximation to X_C obtained via the SVD of X_C.
Lemma 4 (See, for example, [5]). X_{C,k} is a rank-k matrix whose columns are in the span of C. Let
X̂ be any rank-k matrix whose columns are in the span of C. Then, ‖X − X_{C,k}‖_F ≤ ‖X − X̂‖_F.
That is, X_{C,k} is the best rank-k approximation to X whose columns are in the span of C. An efficient
algorithm to compute X_{C,k} is also given in [5]. The algorithm runs in O(ndr + (n + d)r²) time.
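A direct way to compute X_{C,k} (not the faster O(ndr + (n + d)r²) routine of [5]) is to project onto span(C) and truncate the SVD; a sketch:

    import numpy as np

    def X_Ck(X, C, k):
        # X_C = C C^+ X projects X onto the column space of C; X_{C,k} is
        # the best rank-k approximation of X_C, via its SVD.
        Xc = C @ (np.linalg.pinv(C) @ X)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]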
2.1 Sparse Linear Encoders from Column Subset Selection
We show that a set of columns C for which X_{C,k} is a good approximation to X can produce a good
sparse linear encoder. In the algorithm below we assume (this is not essential) that C has full column rank.
The algorithm uses standard linear algebra operations and has total runtime in O(ndr + (n + d)r²).
Blackbox algorithm to compute encoder from CSSP
Inputs: X ∈ ℝ^{n×d}; C ∈ ℝ^{n×r} with C = XΩ and Ω = [e_{i_1}, ..., e_{i_r}]; k ≤ r.
Output: r-sparse linear encoder H ∈ ℝ^{d×k}.
1: Compute a QR-factorization of C as C = QR, with Q ∈ ℝ^{n×r}, R ∈ ℝ^{r×r}.
2: Obtain the SVD of R⁻¹(QᵀX)_k: R⁻¹(QᵀX)_k = U_R Σ_R V_Rᵀ
   (U_R ∈ ℝ^{r×k}, Σ_R ∈ ℝ^{k×k} and V_R ∈ ℝ^{d×k}).
3: Return H = Ω U_R ∈ ℝ^{d×k}.
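A line-by-line NumPy sketch of the blackbox algorithm above, assuming (as stated) that C has full column rank:

    import numpy as np

    def cssp_blackbox_encoder(X, col_idx, k):
        # col_idx: the r selected column indices, i.e., Omega = [e_{i_1}, ..., e_{i_r}].
        n, d = X.shape
        C = X[:, col_idx]                                 # C = X Omega
        Q, R = np.linalg.qr(C)                            # step 1
        M = Q.T @ X                                       # r x d
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        Mk = (U[:, :k] * s[:k]) @ Vt[:k]                  # (Q^T X)_k
        UR, _, _ = np.linalg.svd(np.linalg.solve(R, Mk), full_matrices=False)
        H = np.zeros((d, k))
        H[col_idx, :] = UR[:, :k]                         # step 3: H = Omega U_R
        return H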
In step 2, even though R⁻¹(QᵀX)_k is an r × d matrix, it has rank k, hence the dimensions of
U_R, Σ_R, V_R depend on k, not r. By Lemma 3, the encoder H is r-sparse. Also, H has orthonormal
columns, as is typically desired for an encoder (HᵀH = U_RᵀΩᵀΩU_R = U_RᵀU_R = I). In every column
of our encoder, the non-zeros are at the same r coordinates, which is much stronger than r-sparsity.
The next theorem shows that our encoder is good if C contains a good rank-k approximation X_{C,k}.
Theorem 5 (Blackbox encoder from CSSP). Given X ∈ ℝ^{n×d}, C = XΩ ∈ ℝ^{n×r} with Ω =
[e_{i_1}, ..., e_{i_r}] and k ≤ r, let H be the r-sparse linear encoder produced by the algorithm above in
O(ndr + (n + d)r²) time. Then, the loss satisfies
ℓ(H, X) = ‖X − XH(XH)⁺X‖²_F ≤ ‖X − X_{C,k}‖²_F.
The theorem says that if we can find a set of r columns within which a good rank-k approximation
to X exists, then we can construct a good sparse linear encoder. What remains is to find a sampling
matrix Ω which gives a good set of columns C = XΩ for which ‖X − X_{C,k}‖²_F is small. The main
tool to obtain C and Ω was developed in [5], which gave a constant-factor deterministic approximation algorithm and a relative-error randomized approximation algorithm. We state a simplified form
of the result and then discuss various ways in which this result can be enhanced. Any algorithm to
construct a good set of columns can be used as a black box to get a sparse linear encoder.
Theorem 6 (Near-optimal CSSP [5]). Given X ∈ ℝ^{n×d} of rank ρ and target rank k:
(i) (Theorem 2 in [5]) For sparsity parameter r > k, there is a deterministic algorithm which
runs in time T_{V_k} + O(ndk + dk³) to construct a sampling matrix Ω = [e_{i_1}, ..., e_{i_r}] and
corresponding columns C = XΩ such that
‖X − X_{C,k}‖²_F ≤ (1 + (1 − √(k/r))⁻²) ‖X − X_k‖²_F.
(ii) (Simplified Theorem 5 in [5]) For sparsity parameter r > 5k, there is a randomized algorithm
which runs in time O(ndk + dk³ + r log r) to construct a sampling matrix Ω = [e_{i_1}, ..., e_{i_r}]
and corresponding columns C = XΩ such that
E[‖X − X_{C,k}‖²_F] ≤ (1 + 5k/(r − 5k)) ‖X − X_k‖²_F.
Our "batch" sparse linear encoder uses Theorem 6 in our black-box CSSP-encoder.
"Batch" Sparse Linear Encoder Algorithm
Inputs: X ∈ ℝ^{n×d}; rank k ≤ rank(X); sparsity r > 5k.
Output: r-sparse linear encoder H ∈ ℝ^{d×k}.
1: Use Theorem 6-(ii) to compute columns C = XΩ ∈ ℝ^{n×r}, with inputs X, k, r.
2: Return H computed with X, C, k as input to the CSSP-blackbox encoder algorithm.
Using Theorem 6 in Theorem 5, we have an approximation guarantee for our algorithm.
Theorem 7 (Sparse Linear Encoder). Given X ∈ ℝ^{n×d} of rank ρ, the target number of sparse PCA
vectors k ≤ ρ, and sparsity parameter r > 5k, the "batch" sparse linear encoder algorithm above
runs in time O(ndr + (n + d)r² + dk³) and constructs an r-sparse encoder H such that:
E[‖X − XH(XH)⁺X‖²_F] ≤ (1 + 5k/(r − 5k)) ‖X − X_k‖²_F.
Comments. 1. The expectation is over the random choices in the algorithm, and the bound can be
boosted to hold with high probability or even deterministically. 2. The guarantee is with respect to
‖X − X_k‖²_F (optimal dense PCA): sparsity r = O(k/ε) suffices to mimic top-k (dense) PCA.
We now give the lower bound on sparsity, showing that our result is worst-case optimal. Define the
row-sparsity of H as the number of its rows that are non-zero. When k = 1, the row-sparsity equals
the sparsity of the single factor. The row-sparsity is the total number of dimensions which have nonzero loadings among all the factors. Our algorithm produces an encoder with row-sparsity O(k/ε)
and comes within (1 + ε) of the minimum possible loss. This is worst-case optimal:
Theorem 8 (Lower Bound). There is a matrix X for which any linear encoder that achieves a
(1 + ε)-approximate loss as compared to PCA must have row-sparsity r ≥ k/ε.
The common case in the literature is k = 1 (the top sparse component). Our lower bound
shows that Ω(1/ε)-sparsity is required, and our algorithm asymptotically achieves this lower bound.
To prove Theorem 8, we show the converse of Theorem 5: a good linear auto-encoder with row-sparsity r can be used to construct r columns C for which X_{C,k} approximates X.
Lemma 9. Suppose H is a linear encoder for X with row-sparsity r and decoder G. Then, XHG =
CY, where C is a set of r columns of X and Y ∈ ℝ^{r×d}.
Given Lemma 9, a sketch of the rest of the argument is as follows. Section 9.2 of [5] demonstrates
a matrix for which there do not exist r good columns. Since a good r-sparse encoder gives r good
columns, no r-sparse encoder can be (1 + k/r)-optimal: no linear encoder with row-sparsity r
achieves a loss within (1 + k/r) of PCA. Our construction is asymptotically worst-case optimal.
The lower bound holds for general linear auto-encoders, and so this lower bound also applies to
the symmetric auto-encoder HH⁺, the traditional formulation of sparse PCA. When k = 1, for any
r-sparse unit norm v, there exists X for which ‖X − Xvvᵀ‖²_F ≥ (1 + 1/r)‖X − X_1‖²_F, or, in terms of
the symmetric explained variance, vᵀXᵀXv ≤ ‖X_1‖²_F − (1/r)‖X − X_1‖²_F.
3 Iterative Sparse Linear Encoders
Our CSSP-based algorithm is "batch" in that all k factors are constructed simultaneously. Every
feature in the encoder is r-sparse with non-zero loadings on the same set of r original dimensions;
and, you cannot do better with a row-sparsity of r. Further, the batch algorithm does not distinguish
between the k-factors. That is, there is no top component, second component, and so on.
The traditional techniques for sparse PCA construct the factors iteratively. We can too: run our batch
algorithm in an iterative mode, where in each step we set k = 1 and compute a sparse factor for a
residual matrix. By constructing our k features iteratively (and adaptively), we identify an ordering
among the k features. Further, we might be able to get each feature sparser while still maintaining a
bound on the row-sparsity. We now give an iterative version of our algorithm. In each iteration, we
augment H by computing a top sparse encoder for the residual obtained using the current H.
Iterative Sparse Linear Encoder Algorithm
Inputs: X ∈ ℝ^{n×d}; rank k ≤ rank(X); sparsity parameters r_1, ..., r_k.
Output: (r_1, ..., r_k)-sparse linear encoder H ∈ ℝ^{d×k}.
1: Set the residual Δ = X and H = [ ].
2: for i = 1 to k do
3:   Use the batch algorithm to compute encoder h for Δ, with k = 1 and r = r_i.
4:   Add h to the encoder: H ← [H, h].
5:   Update the residual Δ: Δ ← X − XH(XH)⁺X.
6: Return the (r_1, ..., r_k)-sparse encoder H ∈ ℝ^{d×k}.
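Using the blackbox encoder sketched in Section 2.1, the iterative algorithm is a short loop; here select_columns stands for any CSSP routine (e.g., that of Theorem 6-(ii)) and is a placeholder, not an implementation:

    import numpy as np

    def iterative_sparse_encoder(X, sparsities, select_columns):
        # One r_i-sparse component per pass, computed on the current residual.
        n, d = X.shape
        H = np.zeros((d, 0))
        Delta = X.copy()
        for r in sparsities:
            idx = select_columns(Delta, 1, r)        # r columns of the residual
            h = cssp_blackbox_encoder(Delta, idx, 1) # k = 1 component
            H = np.hstack([H, h])
            XH = X @ H
            Delta = X - XH @ np.linalg.pinv(XH) @ X  # Delta = X - XH(XH)^+ X
        return H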
The next lemma bounds the reconstruction error for this iterative step in the algorithm.
Lemma 10. Suppose, for k ≥ 1, H_k = [h_1, ..., h_k] is an encoder for X, satisfying
‖X − XH_k(XH_k)⁺X‖²_F = err.
Given a sparsity r > 5 and γ ≥ 5/(r − 5), one can compute in time O(ndr + (n + d)r²) an r-sparse
feature h_{k+1} such that the reconstruction error of the encoder H_{k+1} = [h_1, ..., h_k, h_{k+1}] satisfies
E[‖X − XH_{k+1}(XH_{k+1})⁺X‖²_F] ≤ (1 + γ)(err − ‖X − XH_k(XH_k)⁺X‖₂²).
Lemma 10 gives a bound on the reconstruction error for an iterative addition of the next sparse
encoder vector. To see how Lemma 10 is useful, consider target rank k = 2. First construct h_1 with
sparsity r_1 = 5 + 5/ε, which gives (1 + ε)‖X − X_1‖²_F loss. Now construct h_2, also with sparsity
r_2 = 5 + 5/ε. The loss for H = [h_1, h_2] is bounded by
ℓ(H, X) ≤ (1 + ε)² ‖X − X_2‖²_F + ε(1 + ε) ‖X − X_1‖₂².
On the other hand, our batch algorithm uses sparsity r = 10 + 10/ε in each encoder h_1, h_2 and
achieves reconstruction error (1 + ε)‖X − X_2‖²_F. The iterative algorithm uses sparser features,
but pays for it a little in reconstruction error. The second term is small, O(ε), and depends on
‖X − X_1‖₂² = σ₂², which in practice is smaller than ‖X − X_2‖²_F = σ₃² + ··· + σ_d². Using the
iterative algorithm, we can tailor the sparsity of each encoder vector separately to achieve a desired
accuracy. It is algebraically intense to prove a bound for a general choice of the sparsity parameters
r_1, ..., r_k, so for simplicity, we prove a bound for a specific choice of the sparsity parameters which
slowly increase with each iterate. The proof idea is similar to our example with k = 2.
Theorem 11 (Iterative Encoder). Given X ∈ ℝ^{n×d} of rank ρ and k < ρ, set r_j = 5 + ⌈5j/ε⌉ in
our iterative encoder to compute the (r_1, ..., r_k)-sparse encoder H = [h_1, h_2, ..., h_k]. Then, for
every ℓ = 1, ..., k, the encoder H_ℓ = [h_1, h_2, ..., h_ℓ] has reconstruction error
E[‖X − XH_ℓ(XH_ℓ)⁺X‖²_F] ≤ (eℓ)^ε ‖X − X_ℓ‖²_F + ε ℓ^{1+ε} ‖X_ℓ − X_1‖²_F.    (1)
The running time to compute all the encoder vectors is O(ndk²ε⁻¹ + (n + d)k³ε⁻²).
Comments. This is the first theoretical guarantee for iterative sparse encoders. Up to a small additive
term, we have a relative error approximation because (eℓ)^ε = 1 + O(ε log ℓ) grows slowly with ℓ.
Each successive encoder vector has a larger sparsity (as opposed to a fixed sparsity r = 5k + 5k/ε
in the batch algorithm). If we used a constant sparsity r_j = 5 + 5k/ε for every encoder vector in
the iterative algorithm, the relative error term becomes 1 + O(ε) as opposed to 1 + O(ε log ℓ). Just
as with the PCA vectors v_1, v_2, ..., we have a provably good encoder for any ℓ by taking the first ℓ
factors h_1, ..., h_ℓ. In the batch-encoder H = [h_1, ..., h_k], we cannot guarantee that h_1 will give a
reconstruction comparable with X_1. The detailed proof is in the supplementary material.
Proof. (Sketch) For ℓ ≥ 1, we define two quantities Q_ℓ, P_ℓ that will be useful in the proof:
Q_ℓ = (1 + ε)(1 + ε/2)(1 + ε/3)(1 + ε/4) ··· (1 + ε/ℓ);
P_ℓ = Q_ℓ − 1.
Using Lemma 10 and induction, we can prove a bound on the loss of H_ℓ:
E[‖X − XH_ℓ(XH_ℓ)⁺X‖²_F] ≤ Q_ℓ ‖X − X_ℓ‖²_F + Q_ℓ Σ_{j=2}^{ℓ} σ_j² P_{j−1}/Q_{j−1}.    (*)
When ℓ = 1, the claim is that E[‖X − XH_1(XH_1)⁺X‖²_F] ≤ (1 + ε)‖X − X_1‖²_F (since the summation
is empty), which is true by construction of H_1 = [h_1] because r_1 = 5 + 5/ε. For the induction
step, we apply Lemma 10 with γ = ε/(ℓ + 1), condition on err = ‖X − XH_ℓ(XH_ℓ)⁺X‖²_F, whose
expectation is given in (*), and use iterated expectation. The details are given in the supplementary
material. The first term in the bound (1) follows by bounding Q_ℓ using elementary calculus:
log Q_ℓ = Σ_{i=1}^{ℓ} log(1 + ε/i) ≤ Σ_{i=1}^{ℓ} ε/i ≤ ε log(eℓ),
where we used log(1 + x) ≤ x for x ≥ 0 and the well-known upper bound log(eℓ) for the ℓth
harmonic number 1 + 1/2 + 1/3 + ··· + 1/ℓ. Thus, Q_ℓ ≤ (eℓ)^ε. The rest of the proof is to bound the
second term in (*) to obtain the second term in (1). Observe that for i ≥ 1,
P_i = Q_i − 1 = ε Q_i/Q_1 + Q_i/Q_1 − 1 ≤ ε Q_i/Q_1 + Q_{i−1} − 1 = ε Q_i/Q_1 + P_{i−1},
where we used Q_i/Q_1 ≤ Q_{i−1} and we define P_0 = 0. After some algebra which we omit,
Σ_{j=2}^{ℓ} σ_j² P_{j−1}/Q_{j−1} ≤ (ε/Q_1) ‖X_ℓ − X_1‖²_F + Σ_{j=3}^{ℓ} σ_j² P_{j−2}/Q_{j−2}.
Figure 1: Performance of the sparse encoder algorithms (Batch, Iterative, Tpower, Gpower-ℓ0,
Gpower-ℓ1) on the PitProps data (left), Lymphoma data (middle) and Colon data (right): information
loss (top) and symmetric explained variance (bottom) versus sparsity r, with k = 2. Our algorithms
give the minimum information loss, which decreases inversely with r as the theory predicts. It is no
surprise that existing sparse-PCA algorithms do better at maximizing symmetric explained variance.
Using this reduction, it is now an elementary task to prove by induction that
Σ_{j=2}^{ℓ} σ_j² P_{j−1}/Q_{j−1} ≤ (ε/Q_1) Σ_{j=1}^{ℓ−1} ‖X_ℓ − X_j‖²_F.
Since ‖X_ℓ − X_j‖²_F ≤ ‖X_ℓ − X_1‖²_F (ℓ − j)/(ℓ − 1), we have that
Σ_{j=2}^{ℓ} σ_j² P_{j−1}/Q_{j−1} ≤ (ε ‖X_ℓ − X_1‖²_F)/(Q_1(ℓ − 1)) · Σ_{j=1}^{ℓ−1} (ℓ − j) = (ε ℓ ‖X_ℓ − X_1‖²_F)/(2Q_1).
Using (*), we have that
E[‖X − XH_ℓ(XH_ℓ)⁺X‖²_F] ≤ (eℓ)^ε ‖X − X_ℓ‖²_F + (ε ℓ ‖X_ℓ − X_1‖²_F / 2) · (Q_ℓ/Q_1).
The result finally follows because
log(Q_ℓ/Q_1) = Σ_{i=2}^{ℓ} log(1 + ε/i) ≤ ε Σ_{i=2}^{ℓ} 1/i ≤ ε(log(eℓ) − 1) = ε log ℓ,
and so Q_ℓ/Q_1 ≤ ℓ^ε.
4 Demonstration
We empirically demonstrate our algorithms against existing state-of-the-art sparse PCA methods.
The inputs are X ∈ ℝ^{n×d}, the number of components k and the sparsity parameter r. The output
is the sparse encoder H = [h_1, h_2, ..., h_k] ∈ ℝ^{d×k} with ‖h_i‖₀ ≤ r; H is used to project X onto
some subspace to obtain a reconstruction X̂, which decomposes the variance into two terms:
‖X‖²_F = ‖X − X̂‖²_F + ‖X̂‖²_F = Loss + Explained Variance.
For symmetric auto-encoders, minimizing loss is equivalent to maximizing the symmetric explained
variance, the path traditional sparse-PCA takes:
Symmetric Explained Variance = ‖XHH⁺‖²_F / ‖X_k‖²_F ≤ 1.
To capture how informative the sparse components are, we can use the normalized loss:
Loss = ‖X − XH(XH)⁺X‖²_F / ‖X − X_k‖²_F ≥ 1.
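Both reported quantities can be computed as follows (a sketch; X_k and the pseudo-inverses are computed densely, which is fine at the scale of these data sets):

    import numpy as np

    def normalized_loss_and_symmetric_variance(X, H, k):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xk = (U[:, :k] * s[:k]) @ Vt[:k]                  # best rank-k approximation
        XH = X @ H
        loss = (np.linalg.norm(X - XH @ np.linalg.pinv(XH) @ X, 'fro') ** 2
                / np.linalg.norm(X - Xk, 'fro') ** 2)     # >= 1
        sym_var = (np.linalg.norm(X @ (H @ np.linalg.pinv(H)), 'fro') ** 2
                   / np.linalg.norm(Xk, 'fro') ** 2)      # <= 1
        return loss, sym_var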
We report the symmetric explained variance primarily for historical reasons, because existing sparse
PCA methods have constructed auto-encoders to optimize the symmetric explained variance.
We implemented an instance of the sparse PCA algorithm of Theorem 7 with the deterministic
technique described in part (i) of Theorem 6. (This algorithm gives a constant factor approximation,
as opposed to the relative error approximation of the algorithm in Theorem 7, but it is deterministic
and simpler to implement.) We call this the "Batch" sparse linear auto-encoder algorithm. We
correspondingly implement an "Iterative" version with fixed sparsity r in each principal component.
In each step of the iterative sparse auto-encoder algorithm we use the above batch algorithm to select
one principal component with sparsity at most r.
We compare ours to the following state-of-the-art sparse PCA algorithms: (1) T-Power: truncated
power method [23]. (2) G-power-ℓ0: generalized power method with ℓ0 regularization [10]. (3)
G-power-ℓ1: generalized power method with ℓ1 regularization [10]. All these algorithms were
designed to operate for k = 1 (notice our algorithms handle any k), so to pick k components, we use
the "deflation" method suggested in [13]. We use the same data sets used by these prior algorithms
(all available in [23]): PitProps (X ∈ ℝ^{13×13}); Colon (X ∈ ℝ^{500×500}); Lymphoma (X ∈ ℝ^{500×500}).
The qualitative results for different k are similar, so we only show k = 2 in Figure 1. The
take-away is that loss and symmetric variance give very different sparse encoders (example encoders
[h_1, h_2] with r = 5 are shown in the table below). This underlines why the correct objective is
important. The machine learning goal is to preserve as much information as possible, which makes
loss the compelling objective. The figures show that as r increases, our algorithms deliver near-optimal 1 + O(1/r) normalized loss, as the theory guarantees. The "iterative" algorithm has better
empirical performance than the batch algorithm.
[Table: example encoders [h_1, h_2] with r = 5 on the 13 PitProps variables, one column pair each
for Batch, Iter., TP, GP-ℓ0 and GP-ℓ1.]
Summary. Loss minimization and variance maximization give very different encoders under a sparsity constraint. The empirical performance of our loss minimization algorithms follows the theory.
Our iterative algorithm is empirically better though it has a slightly worse theoretical guarantee.
5
Discussion
Historically, sparse PCA was cardinality constrained variance maximization. Variance per se has
no intrinsic value, and is hard to define for non-orthogonal or correlated encoders, which is to be
expected once you introduce a sparsity constraint. Our definition of loss is general and captures the
machine learning goal of preserving as much information as possible.
We gave theoretical guarantees for sparse encoders. Our iterative algorithm has a weaker bound
than our batch algorithm, yet the iterative algorithm is better empirically. Iterative algorithms are
tough to analyze, and it remains open whether a tighter analysis can be given. We conjecture that
the iterative algorithm is as good or better than the batch algorithm, though proving it seems elusive.
Finally, we have not optimized for running times. Considerable speed-ups may be possible without
sacrificing accuracy. For example, in the iterative algorithm (which repeatedly calls the CSSP algorithm with k = 1), it should be possible to significantly speed up the generic algorithm (for arbitrary
k) to a specialized one for k = 1. We leave such implementation optimizations for future work.
Acknowledgments. Magdon-Ismail was partially supported by NSF:IIS 1124827 and by the Army
Research Laboratory under Cooperative Agreement W911NF-09-2-0053 (the ARL-NSCTA). The
views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research
Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute
reprints for Government purposes notwithstanding any copyright notation here on.
8
References
[1] M. Asteris, D. Papailiopoulos, and A. Dimakis. Non-negative sparse PCA with provable guarantees. In
Proc. ICML, 2014.
[2] M. Asteris, D. Papailiopoulos, and G. Karystinos. Sparse principal component of a rank-deficient matrix.
In Proc. ISIT, 2011.
[3] P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples
without local minima. Neural Networks, 2:53–58, 1988.
[4] H. Bourlard and Y. Kamp. Auto-association by multilayer perceptrons and singular value decomposition.
Biological Cybernetics, 59:291–294, 1988.
[5] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near-optimal column-based matrix reconstruction.
SIAM Journal on Computing, 43(2), 2014.
[6] J. Cadima and I. Jolliffe. Loadings and correlations in the interpretation of principal components. Applied
Statistics, 22:203–214, 1995.
[7] G. Cottrell and P. Munro. Principal components analysis of images via back propagation. In Proc. SPIE
1001, Visual Communications and Image Processing '88, 1988.
[8] A. d'Aspremont, F. Bach, and L. E. Ghaoui. Optimal solutions for sparse principal component analysis.
Journal of Machine Learning Research, 9:1269–1294, June 2008.
[9] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA
using semidefinite programming. SIAM Review, 49(3):434–448, 2007.
[10] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse principal
component analysis. The Journal of Machine Learning Research, 11:517–553, 2010.
[11] H. Kaiser. The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23(3):187–200,
1958.
[12] J. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964.
[13] L. W. Mackey. Deflation methods for sparse PCA. In Proc. NIPS, pages 1017–1024, 2009.
[14] M. Magdon-Ismail. NP-hardness and inapproximability of sparse PCA. arXiv:1502.05675, 2015.
[15] A. Makhzani and B. Frey. k-sparse autoencoders. In ICLR, 2014.
[16] B. Moghaddam, Y. Weiss, and S. Avidan. Generalized spectral bounds for sparse LDA. In ICML, 2006.
[17] E. Oja. Data compression, feature extraction and autoassociation in feedforward neural networks. In
Artificial Neural Networks, volume 1, pages 737–745, 1991.
[18] E. Oja. Principal components, minor components and linear neural networks. Neural Networks, 5:927–935, 1992.
[19] K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine,
2:559–572, 1901.
[20] J. Sammon. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, C-18(5):401–409, 1969.
[21] H. Shen and J. Z. Huang. Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99:1015–1034, July 2008.
[22] N. Trendafilov, I. T. Jolliffe, and M. Uddin. A modified principal component technique based on the lasso.
Journal of Computational and Graphical Statistics, 12:531–547, 2003.
[23] X.-T. Yuan and T. Zhang. Truncated power method for sparse eigenvalue problems. The Journal of
Machine Learning Research, 14(1):899–925, 2013.
[24] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational &
Graphical Statistics, 15(2):265–286, 2006.
5,806 | 6,253 | Blazing the trails before beating the path:
Sample-efficient Monte-Carlo planning
Jean-Bastien Grill
Michal Valko
SequeL team, INRIA Lille - Nord Europe, France
[email protected]
[email protected]
Rémi Munos
Google DeepMind, UK*
[email protected]
Abstract
You are a robot and you live in a Markov decision process (MDP) with a finite or an
infinite number of transitions from state-action to next states. You got brains and so
you plan before you act. Luckily, your roboparents equipped you with a generative
model to do some Monte-Carlo planning. The world is waiting for you and you
have no time to waste. You want your planning to be efficient. Sample-efficient.
Indeed, you want to exploit the possible structure of the MDP by exploring only a
subset of states reachable by following near-optimal policies. You want guarantees
on sample complexity that depend on a measure of the quantity of near-optimal
states. You want something that is an extension of Monte-Carlo sampling (for
estimating an expectation) to problems that alternate maximization (over actions)
and expectation (over next states). But you do not want to StOP with exponential
running time, you want something simple to implement and computationally
efficient. You want it all and you want it now. You want TrailBlazer.
1 Introduction
We consider the problem of sampling-based planning in a Markov decision process (MDP) when a
generative model (oracle) is available. This approach, also called Monte-Carlo planning or Monte-Carlo tree search (see e.g., [12]), has been popularized in the game of computer Go [7, 8, 15] and
shown impressive performance in many other high dimensional control and game problems [4]. In
the present paper, we provide a sample complexity analysis of a new algorithm called TrailBlazer.
Our assumption about the MDP is that we possess a generative model which can be called from any
state-action pair to generate rewards and transition samples. Since making a call to this generative
model has a cost, be it a numerical cost expressed in CPU time (in simulated environments) or a
financial cost (in real domains), our goal is to use this model as parsimoniously as possible.
Following dynamic programming [2], planning can be reduced to an approximation of the (optimal) value function, defined as the maximum of the expected sum of discounted rewards:
\[ \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_t\Big], \]
where γ ∈ [0, 1) is a known discount factor. Indeed, if an ε-optimal approximation of the value function at any state-action pair is available, then the policy corresponding to selecting in each state the action with the highest approximated value will be O(ε/(1 − γ))-optimal [3].
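To make this reduction concrete, here is a minimal Python sketch (our illustration, not code from the paper): given approximate state-action values assumed to be ε-accurate, the induced policy simply picks the highest-valued action in each state.

import numpy as np

def greedy_policy(q_hat):
    # q_hat: dict mapping each state to a NumPy array of approximate
    # action values, each assumed to be within epsilon of the truth.
    # The resulting policy is O(epsilon / (1 - gamma))-optimal.
    return {state: int(np.argmax(values)) for state, values in q_hat.items()}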
Consequently, in this paper, we focus on a near-optimal approximation of the value function for
a single given state (or state-action pair). In order to assess the performance of our algorithm we
measure its sample complexity defined as the number of oracle calls, given that we guarantee its
consistency, i.e., that with probability at least 1 − δ, TrailBlazer returns an ε-approximation of the
value function as required by the probably approximately correct (PAC) framework.
* On leave from SequeL team, INRIA Lille - Nord Europe, France
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We use a tree representation to represent the set of states that are reachable from any initial state.
This tree alternates maximum (MAX) nodes (corresponding to actions) and average (AVG) nodes
(corresponding to the random transition to next states). We assume the number K of actions is finite.
However, the number N of possible next states is either finite or infinite (which may be the case
when the state space is infinite), and we will report results in both the finite N and the infinite case.
The root node of this planning tree represents the current state (or a state-action) of the MDP and its
value is the maximum (over all policies defined at MAX nodes) of the corresponding expected sum of
discounted rewards. Notice that by using a tree representation, we do not use the property that some
state of the MDP can be reached by different paths (sequences of states-actions). Therefore, this state
will be represented by different nodes in the tree. We could potentially merge such duplicates to form
a graph instead. However, for simplicity, we choose not to merge these duplicates and keep a tree,
which could make the planning problem harder. To sum up, our goal is to return, with probability
1 − δ, an ε-accurate value of the root node of this planning tree while using as few calls
to the oracle as possible. Our contribution is an algorithm called TrailBlazer whose sampling
strategy depends on the specific structure of the MDP and for which we provide sample complexity
bounds in terms of a new problem-dependent measure of the quantity of near-optimal nodes. Before
describing our contribution in more detail we first relate our setting to what has been around.
1.1 Related work
In this section we focus on the dependency between ε and the sample complexity; all bounds of the form (1/ε)^c are up to a poly-logarithmic multiplicative factor, not indicated for clarity. Kocsis and Szepesvári [12] introduced the UCT algorithm (upper-confidence bounds for trees). UCT is efficient
in computer Go [7, 8, 15] and a number of other control and game problems [4]. UCT is based on
generating trajectories by selecting in each MAX node the action that has the highest upper-confidence
bound (computed according to the UCB algorithm of Auer et al. [1]). UCT converges asymptotically to
the optimal solution, but its sample complexity can be worse than doubly-exponential in (1/ε) for
some MDPs [13]. One reason for this is that the algorithm can expand very deeply the apparently
best branches but may lack sufficient exploration, especially when a narrow optimal path is hidden in
a suboptimal branch. As a result, this approach works well in some problems with a specific structure
but may be much worse than a uniform sampling in other problems.
On the other hand, a uniform planning approach is safe for all problems. Kearns et al. [11] generate a
sparse look-ahead tree based on expanding all MAX nodes and sampling a finite number of children
from AVG nodes up to a fixed depth that depends on the desired accuracy ε. Their sample complexity
is² of the order of (1/ε)^{log(1/ε)}, which is non-polynomial in 1/ε. This bound is better than that for
UCT in a worst-case sense. However, as their look-ahead tree is built in a uniform and non-adaptive
way, this algorithm fails to benefit from a potentially favorable structure of the MDP.
An improved version of this sparse-sampling algorithm by Walsh et al. [17] cuts suboptimal branches
in an adaptive way but unfortunately does not come with an improved bound and stays non-polynomial
even in the simple Monte Carlo setting for which K = 1.
Although the sample complexity is certainly non-polynomial in the worst case, it can be polynomial in some specific problems. First, for the case of finite N, the sample complexity is polynomial, and Szörényi et al. [16] show that a uniform sampling algorithm has complexity at most (1/ε)^{2+log(KN)/(log(1/γ))}. Notice that the product KN represents the branching factor of the lookahead planning tree. This bound could be improved for problems with specific reward structure or transition smoothness. In order to do this, we need to design a non-uniform, adaptive algorithm that
captures the possible structure of the MDP when available, while making sure that in the worst case,
we do not perform worse than a uniform sampling algorithm.
The case of deterministic dynamics (N = 1) and rewards considered by Hren and Munos [10] has a
complexity of order (1/ε)^{(log κ)/(log(1/γ))}, where κ ∈ [1, K] is the branching factor of the subset of
near-optimal nodes.3 The case of stochastic rewards has been considered by Bubeck and Munos [5]
but with the difference that the goal was not to approximate the optimal value function but the value
of the best open-loop policy, which consists of a sequence of actions independent of states. Their sample complexity is (1/ε)^{max(2, (log κ)/(log 1/γ))}.
² neglecting exponential dependence in γ
³ nodes that need to be considered in order to return a near-optimal approximation of the value at the root
In the case of general MDPs, Buşoniu and Munos [6] consider the case of a fully known model of
the MDP. For any state-action, the model returns the expected reward and the set of all next states
(assuming N is finite) with their corresponding transition probabilities. In that case, the complexity is
(1/ε)^{log κ/(log(1/γ))}, where κ ∈ [0, KN] can again be interpreted as a branching factor of the subset
of near-optimal nodes. These approaches use the optimism in the face of uncertainty principle whose
applications to planning have been studied by Munos [13]. TrailBlazer is different. It
is not optimistic by design: to avoid a voracious demand for samples, it does not balance the upper-confidence bounds of all possible actions. This is crucial for polynomial sample complexity in the
infinite case. The whole Section 3 shines many rays of intuitive light on this single and powerful idea.
The work that is most related to ours is StOP by Szörényi et al. [16], which considers the planning problem in MDPs with a generative model. Their complexity bound is of the order of (1/ε)^{2+log κ/(log(1/γ))+o(1)}, where κ ∈ [0, KN] is a problem-dependent quantity. However, their κ, defined as lim_{ε→0} max(κ₁, κ₂) (in their Theorem 2), is somehow difficult to interpret as a measure of
the quantity of near-optimal nodes. Moreover, StOP is not computationally efficient as it requires
identifying the optimistic policy, which requires computing an upper bound on the value of any possible
policy, whose number is exponential in the number of MAX nodes, which itself is exponential in the
planning horizon. Although they suggest (in their Appendix F) a computational improvement, this
version is not analyzed. Finally, unlike in the present paper, StOP does not consider the case N = ∞
of an unbounded number of states.
1.2 Our contributions
Our main result is TrailBlazer, an algorithm with a bound on the number of samples required to
return a high-probability ε-approximation of the root node whether the number of next states N is
finite or infinite. The bounds use a problem-dependent quantity (κ or d) that measures the quantity of
near-optimal nodes. We now summarize the results.
Finite number of next states (N < ∞): The sample complexity of TrailBlazer is of the order
of⁴ (1/ε)^{max(2, log(Nκ)/log(1/γ)+o(1))}, where κ ∈ [1, K] is related to the branching factor of the set
of near-optimal nodes (precisely defined later).
Infinite number of next states (N = ∞): The complexity of TrailBlazer is (1/ε)^{2+d}, where d
is a measure of the difficulty to identify the near-optimal nodes. Notice that d can be finite even if
the planning problem is very challenging.⁵ We also state our contributions in specific settings in
comparison to previous work.
• For the case N < ∞, we improve over the best-known previous worst-case bound, with an exponent (of 1/ε) of max(2, log(NK)/log(1/γ)) instead of the 2 + log(NK)/log(1/γ) reported by Szörényi et al. [16].
• For the case N = ∞, we identify properties of the MDP (when d = 0) under which the sample complexity is of order 1/ε². This is the case when there are non-vanishing action-gaps⁶ from any state along near-optimal policies, or when the probability of transitioning to nodes with gap Δ is upper bounded by Δ². This complexity bound is as good as Monte-Carlo sampling, and for this reason TrailBlazer is a natural extension of Monte-Carlo sampling (where all nodes are AVG) to stochastic control problems (where MAX and AVG nodes alternate). Also, no previous algorithm reported a polynomial bound when N = ∞.
• In MDPs with deterministic transitions (N = 1) but stochastic rewards, our bound is (1/ε)^{max(2, log κ/(log 1/γ))}, which is similar to the bound achieved by Bubeck and Munos [5] in a similar setting (open-loop policies).
• In the evaluation case without control (K = 1), TrailBlazer behaves exactly as Monte-Carlo sampling (thus achieving a complexity of 1/ε²), even in the case N = ∞.
• Finally, TrailBlazer is easy to implement and is numerically efficient.
⁴ neglecting logarithmic terms in ε and δ
⁵ since when N = ∞ the actual branching factor of the set of reachable nodes is infinite
⁶ defined as the difference in values of the best and second-best actions
2 Monte-Carlo planning with a generative model

Setup We operate on a planning tree T. Each node of T from the root down is alternatively either an average (AVG) or a maximum (MAX) node. For any node s, C[s] is the set of its children. We consider trees T for which the cardinality of C[s] for any MAX node s is bounded by K. The cardinality N of C[s] for any AVG node s can be either finite, N < ∞, or infinite. We consider both cases; TrailBlazer applies to both situations. We provide performance guarantees for a general case and possibly tighter, N-dependent guarantees in the case of N < ∞. We assume that we have a generative model of the transitions and rewards: each AVG node s is associated with a transition, a random variable τ_s ∈ C[s], and a reward, a random variable r_s ∈ [0, 1].

Figure 1: TrailBlazer. On input (δ, ε), the algorithm sets η ← γ^{1/max(2, log(1/ε))}, sets a parameter λ (a logarithmic function of ε, γ, η, and K), sets m ← (log(1/δ) + λ)/((1 − γ)² ε²), uses η and λ as global parameters, and outputs μ ← call to the root with parameters (m, ε/2).
Objective For any node s, we define the value function V[s] as the optimum over policies π (giving a successor to all MAX nodes) of the sum of discounted expected rewards obtained when playing policy π,
\[ V[s] = \sup_\pi \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_{s_t} \,\Big|\, s_0 = s, \pi\Big], \]
where γ ∈ (0, 1) is the discount factor. If s is an AVG node, V satisfies the following Bellman equation,
\[ V[s] = \mathbb{E}[r_s] + \gamma \sum_{s' \in C[s]} p(s'|s)\, V[s']. \]
If s is a MAX node, then V[s] = max_{s′∈C[s]} V[s′]. The planner has access to the oracle, which can be called for any AVG node s to either get a reward r or a transition τ; these are two independent random variables identically distributed as r_s and τ_s respectively.
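The following Python sketch (ours; it assumes a small, fully known finite tree rather than sample access through a generative model) makes the MAX/AVG alternation concrete.

def value(node, gamma):
    # Exact value of a node in a finite MAX/AVG tree.
    # A node is a dict with a 'kind' key ('max', 'avg', or 'leaf').
    # MAX nodes carry a list 'children'; AVG nodes carry an expected
    # reward 'r' and a list of (probability, child) pairs 'transitions'.
    # TrailBlazer itself never sees these probabilities; it only samples.
    if node['kind'] == 'leaf':
        return 0.0
    if node['kind'] == 'max':
        return max(value(child, gamma) for child in node['children'])
    # AVG node: Bellman equation V[s] = E[r_s] + gamma * sum p(s'|s) V[s']
    return node['r'] + gamma * sum(p * value(child, gamma)
                                   for p, child in node['transitions'])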
With the notation above, our goal is to estimate the value V[s₀] of the root node s₀ using the smallest possible number of oracle calls. More precisely, given any δ and ε, we want to output a value μ_{ε,δ} such that P[|μ_{ε,δ} − V[s₀]| > ε] ≤ δ using the smallest possible number of oracle calls n_{ε,δ}. The number of calls is the sample complexity of the algorithm.

Figure 2: AVG node
1: Input: m, ε
2: Initialization: {Only executed on first call}
3: SampledNodes ← ∅, r ← 0
4: Run:
5: if ε ≥ 1/(1 − γ) then
6:   Output: 0
7: end if
8: if |SampledNodes| > m then
9:   ActiveNodes ← SampledNodes(1 : m)
10: else
11:   while |SampledNodes| < m do
12:     τ ← {new sample of next state}
13:     SampledNodes.append(τ)
14:     r ← r + [new sample of reward]
15:   end while
16:   ActiveNodes ← SampledNodes
17: end if {At this point, |ActiveNodes| = m}
18: for all unique nodes s ∈ ActiveNodes do
19:   k ← #occurrences of s in ActiveNodes
20:   ν ← call s with parameters (k, ε/γ)
21:   μ ← μ + νk/m
22: end for
23: Output: γμ + r/|SampledNodes|
2.1 Blazing the trails with TrailBlazer
To fulfill the above objective, our TrailBlazer constructs a planning tree T which is, at any
time, a finite subset of the potentially infinite tree. Only the already visited nodes are in T and
explicitly represented in memory. Taking the object-oriented paradigm, each node of T is a persistent
object with its own memory which can receive and perform calls respectively from and to other
nodes. A node can potentially be called several times (with different parameters) during the run of
TrailBlazer and may reuse (some of) its stored (transition and reward) samples. In particular, after
node s receives a call from its parent, node s may perform internal computation by calling its own
children in order to return a real value to its parent.
Pseudocode of TrailBlazer is in Figure 1 along with the subroutines for MAX nodes in Figure 3 and
AVG nodes in Figure 2. A node (MAX or AVG) is called with two parameters m and ε, which represent
some requested properties of the returned value: m controls the desired variance and ε the desired
maximum bias. We now describe the MAX and AVG node subroutines.
MAX nodes A MAX node s keeps a lower and an upper bound of its children's values, which with high probability simultaneously hold at all times. It sequentially calls its children with different parameters in order to get more and more precise estimates of their values. Whenever the upper bound of one child becomes lower than the maximum lower bound, this child is discarded. This process can stop in two ways: 1) The set L of the remaining children has shrunk enough that there is a single child b⋆ left. In this case, s calls b⋆ with the same parameters that s received and uses the output of b⋆ as its own output. 2) The precision we have on the value of the remaining children is high enough. In this case, s returns the highest estimate of the children in L. Note that the MAX node is eliminating actions to identify the best. Any other best-arm identification algorithm for bandits can be adapted instead.
Figure 3: MAX node
1: Input: m, ε
2: L ← all children of the node
3: ℓ ← 1
4: while |L| > 1 and U ≥ (1 − γ)ε do
5:   U ← (2/(1 − γ)) √( (log(Kℓ/(δε)) + γ/(η − γ) + λ + 1)/ℓ )
6:   for b ∈ L do
7:     μ_b ← call b with (ℓ, Uγ/(1 − γ))
8:   end for
9:   L ← { b : μ_b + 2U/(1 − γ) ≥ sup_j μ_j − 2U/(1 − γ) }
10:  ℓ ← ℓ + 1
11: end while
12: if |L| > 1 then
13:   Output: μ ← max_{b∈L} μ_b
14: else { L = {b⋆} }
15:   b⋆ ← arg max_{b∈L} μ_b
16:   μ ← call b⋆ with (m, ε)
17:   Output: μ
18: end if
AVG nodes Every AVG node s keeps a list of all the children that it has already sampled and a reward estimate r ∈ ℝ. Note that the list may contain the same child multiple times (this is particularly true for N < ∞). After receiving a call with parameters (m, ε), s checks if ε ≥ 1/(1 − γ). If this condition is verified, then it returns zero. If not, s considers the first m sampled children and potentially samples more children from the generative model if needed. For every child s′ in this list, s calls it with parameters (k, ε/γ), where k is the number of times a transition toward this child was sampled. It returns r + γμ, where μ is the average of all the children estimates.
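A minimal Python sketch of this sample-reuse logic (ours; the oracle interface, the node objects, and the method names are hypothetical stand-ins for the paper's generative model):

class AvgNode:
    # Sketch of the AVG node subroutine of Figure 2 (our simplification).
    def __init__(self, sample_transition, sample_reward, gamma):
        self.sample_transition = sample_transition  # oracle: () -> child node
        self.sample_reward = sample_reward          # oracle: () -> reward in [0, 1]
        self.gamma = gamma
        self.sampled = []   # persistent list of sampled children
        self.r = 0.0        # accumulated reward samples

    def call(self, m, eps):
        if eps >= 1.0 / (1.0 - self.gamma):
            return 0.0
        while len(self.sampled) < m:        # sample only when needed
            self.sampled.append(self.sample_transition())
            self.r += self.sample_reward()
        active = self.sampled[:m]           # keep only the first m samples
        mu = 0.0
        for child in set(active):
            k = active.count(child)         # multiplicity of this child
            mu += child.call(k, eps / self.gamma) * k / m
        return self.gamma * mu + self.r / len(self.sampled)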
Anytime algorithm TrailBlazer is naturally anytime. It can be called with slowly decreasing ε, such that m is always increased only by 1, without having to throw away any previously collected samples. Executing TrailBlazer with ε₀ and then with ε < ε₀ leads to the same amount of computation as immediately running TrailBlazer with ε.
Practical considerations The parameter λ exists so that the behavior depends only on the randomness of oracle calls and the parameters (m, ε) that the node has been called with. This is a desirable property because it opens the possibility to extend the algorithm to more general settings, for instance if we also have MIN nodes. However, for practical purposes, we may set λ = 0 and modify the definition of U in Figure 3 by replacing K with the number of oracle calls made so far globally.
3 Cogs whirring behind
Before diving into the analysis we explain the ideas behind TrailBlazer and the choices made.
Tree-based algorithm The number of policies the planner can consider is exponential in the
number of states. This leads to two major challenges. First, reducing the problem to multi-arm
bandits on the set of the policies would hurt. When a reward is collected from a state, all the policies
which could reach that state are affected. Therefore, it is useful to share the information between the
policies. The second challenge is computational as it is infeasible to keep all policies in memory.
These two problems immediately vanish with just how TrailBlazer is formulated. Contrary to
Szörényi et al. [16], we do not represent the policies explicitly or update them simultaneously to
share the information, but we store all the information directly in the planning tree we construct.
Indeed, by having all the nodes being separate entities that store their own information, we can share
information between policies without explicitly having to enforce it.
We steel ourselves for the detailed understanding with the following two arguments. They shed light
from two different angles on the very same key point: Do not refine more paths than you need to!
Delicate treatment of uncertainty First, we give intuition about the two parameters which measure the requested precision of a call. The output estimate μ of any call with parameters (m, ε) verifies the following property (conditioned on a high-probability event),
\[ \forall \lambda \quad \mathbb{E}\big[e^{\lambda(\mu - V[s])}\big] \le \exp\Big(\varepsilon|\lambda| + \frac{\lambda^2 \sigma^2}{2}\Big), \quad \text{with } \sigma^2 = O(1/m) \text{ and constant } \varepsilon. \tag{1} \]
This looks very much like the definition of μ being uncentered sub-Gaussian, except that instead of λ in the exponential function there is |λ|, and there is a λ-independent constant ε. Inequality 1 implies that the absolute value of the bias of the output estimate μ is bounded by ε,
\[ \big|\mathbb{E}[\mu] - V[s]\big| \le \varepsilon. \]
As in the sub-Gaussian case, the second term ½λ²σ² is a variance term. Therefore, ε controls the maximum bias of μ and 1/m controls its sub-variance. In some cases, getting a high-variance or low-variance estimate matters less, as it is going to be averaged later with other independent estimates by an ancestor AVG node. In this case we prefer to query for high variance rather than a low one, in order to decrease sample complexity.
From ε and σ it is possible to deduce a confidence bound on |μ − V[s]|, typically by summing the bias ε and a term proportional to the standard deviation σ = O(1/√m). Previous approaches
[16, 5] consider a single parameter, representing the width of this high-probability confidence interval.
TrailBlazer is different. In TrailBlazer, the nodes can perform high-variance and low-bias
queries but can also query for both low-variance and low-bias. TrailBlazer treats these two types
of queries differently. This is the whetstone of TrailBlazer and the reason why it is not optimistic.
Refining few paths In this part we explain the condition |SampledNodes| > m in Figure 2, which
is crucial for our approach and results. First notice that, as long as TrailBlazer encounters only AVG nodes, it behaves just like Monte-Carlo sampling: without the MAX nodes we would just be doing a simple averaging of trajectories. However, when TrailBlazer encounters a MAX node it locally uses more samples around this MAX node, temporarily moving away from a Monte-Carlo behavior.
This enables TrailBlazer to compute the best action at this MAX node. Nevertheless, once this
best action is identified with high probability, the algorithm should behave again like Monte-Carlo
sampling. Therefore, TrailBlazer forgets the additional nodes, sampled just because of the MAX
node, and only keeps in memory the first m ones. This is done with the following line in Figure 2,
ActiveNodes ← SampledNodes(1 : m).
Again, while additional transitions were useful for some MAX node parents to decide which action
to pick, they are discarded once this choice is made. Note that they can become useful again if an
ancestor becomes unsure about which action to pick and needs more precision to make a choice. This
is an important difference between TrailBlazer and some previous approaches like UCT where all
the already sampled transitions are equally refined. This treatment enables us to provide polynomial
bounds on the sample complexity for some special cases even in the infinite case (N = ?).
4 TrailBlazer is good and cheap: consistency and sample complexity
In this section, we start with our consistency result, stating that TrailBlazer outputs a correct value
in a PAC (probably approximately correct) sense. Later, we define a measure of the problem difficulty
which we use to state our sample-complexity results. We remark that the following consistency result
holds whether the state space is finite or infinite.
Theorem 1. For all δ and ε, the output μ_{ε,δ} of TrailBlazer called on the root s₀ with (ε, δ) verifies
\[ P\big[|\mu_{\varepsilon,\delta} - V[s_0]| > \varepsilon\big] < \delta. \]
4.1 Definition of the problem difficulty
We now define a measure of problem difficulty that we use to provide our sample complexity
guarantees. We define a set of near-optimal nodes such that exploring only this set is enough to
compute an optimal policy. Let s′ be a MAX node of tree T. For any of its descendants s, let c_s(s′) ∈ C[s′] be the child of s′ on the path between s′ and s. For any MAX node s, we define
\[ \Delta_s(s') = \max_{x \in C[s']} V[x] - V[c_s(s')]. \]
Δ_s(s′) is the difference of the sum of discounted rewards starting from s′ between an agent playing optimally and one playing first the action toward s and then optimally.
Definition 1 (near-optimality). We say that a node s of depth h is near-optimal if, for any even depth h′,
\[ \Delta_s(s_{h'}) \le \frac{16\,\gamma^{(h-h')/2}}{\varepsilon(1-\gamma)}, \]
with s_{h′} the ancestor of s of even depth h′. Let N_h be the set of all near-optimal nodes of depth h.
Remark 1. Notice that the subset of near-optimal nodes contains all required information to get
the value of the root. In the case N = ∞, when p(s|s′) = 0 for all s and s′, then our definition of near-optimal nodes leads to the smallest subset in a sense we make precise in Appendix C. We prove that with probability 1 − δ, TrailBlazer only explores near-optimal nodes. Therefore, the size of the
subset of near-optimal nodes directly reflects the sample complexity of TrailBlazer.
In Appendix C, we discuss the negatives of other potential definitions of near-optimality.
4.2 Sample complexity in the finite case
We first state our result for the case where the number of children of each AVG node is finite and bounded by N.
Definition 2. We define κ ∈ [1, K] as the smallest number such that
\[ \exists C\ \forall h, \quad |\mathcal{N}_{2h}| \le C N^h \kappa^h. \]
Notice that since the total number of nodes of depth 2h is bounded by (KN)^h, κ is upper-bounded by K, the maximum number of children of a MAX node. However, κ can be as low as 1 in cases when the set
of near-optimal nodes is small.
Theorem 2. There exist C > 0 and α such that for all ε > 0 and δ > 0, with probability 1 − δ, the sample complexity of TrailBlazer (the number of calls to the generative model before the algorithm terminates) is
\[ n(\varepsilon, \delta) \le C\,(1/\varepsilon)^{\max\left(2,\ \frac{\log(N\kappa)}{\log(1/\gamma)} + o(1)\right)} \big(\log(1/\delta) + \log(1/\varepsilon)\big)^{\alpha}, \]
where α = 5 when log(Nκ)/log(1/γ) ≤ 2 and α = 3 otherwise.
This provides a problem-dependent sample-complexity bound, which already in the worst case (κ = K) improves over the best-known worst-case bound Õ((1/ε)^{2+log(KN)/log(1/γ)}) [16]. This bound gets better as κ gets smaller and is minimal when κ = 1. This is, for example, the case when the gap (see the definition given in Equation 2) at MAX nodes is uniformly lower-bounded by some Δ > 0. In this case, this theorem provides a bound of order (1/ε)^{max(2, log(N)/log(1/γ))}. However, we will show in Remark 2 that we can further improve this bound to (1/ε)².
4.3 Sample complexity in the infinite case
Since the previous bound depends on N, it does not apply to the infinite case with N = ∞. We now
provide a sample complexity result in the case N = ∞. However, notice that when N is bounded,
then both results apply.
We first define the gap Δ(s) for any MAX node s as the difference between the best and second-best arm,
\[ \Delta(s) = V[i^\star] - \max_{i \in C[s],\, i \ne i^\star} V[i] \quad \text{with} \quad i^\star = \arg\max_{i \in C[s]} V[i]. \tag{2} \]
For any even integer h, we define a random variable S^h taking values among MAX nodes of depth h, in the following way. First, for every AVG node from the root to nodes of depth h, we draw a single transition to one of its children according to the corresponding transition probabilities. This defines a subtree with K^{h/2} nodes of depth h, and we choose S^h to be one of them uniformly at random. Furthermore, for any even integer h′ < h we denote by S^h_{h′} the MAX node ancestor of S^h of depth h′.
Definition 3. We define d ≥ 0 as the smallest d such that for all ε there exists a > 0 for which, for all even h > 0,
\[ \mathbb{E}\Bigg[ K^{h/2}\, \mathbf{1}\{S^h \in \mathcal{N}_h\} \prod_{\substack{0 \le h' < h \\ h' \equiv 0\ (\mathrm{mod}\ 2)}} \frac{\mathbf{1}\big\{\Delta(S^h_{h'}) < 16\,\gamma^{(h-h')/2}/(\varepsilon(1-\gamma))\big\}}{\gamma^{h-h'}} \Bigg] \le a\,\gamma^{-dh}. \]
If no such d exists, we set d = ∞.
This definition of d takes into account the size of the near-optimality set (just like κ) but, unlike κ, it
also takes into account the difficulty to identify the near-optimal paths.
Intuitively, the expected number of oracle calls performed by a given AVG node s is proportional to:
(1/ε²) × (the product of the inverted squared gaps of the set of MAX nodes in the path from the root to s) × (the probability of reaching s by following a policy which always tries to reach s).
Therefore, a near-optimal path with a larger number of small MAX node gaps can be considered
difficult. By assigning a larger weight to difficult nodes, we are able to give a better characterization
of the actual complexity of the problem and provide polynomial guarantees on the sample complexity
for N = ? when d is finite.
Theorem 3. If d is finite, then there exists C > 0 such that for all ε > 0 and δ > 0, the expected sample complexity of TrailBlazer satisfies
\[ \mathbb{E}[n(\varepsilon, \delta)] \le C\, \frac{\big(\log(1/\delta) + \log(1/\varepsilon)\big)^3}{\varepsilon^{2+d}}. \]
Note that this result holds in expectation only, contrary to Theorem 2 which holds in high probability.
We now give an example for which d = 0, followed by a special case of it.
Lemma 1. If there exist c > 0 and b > 2 such that for any near-optimal AVG node s,
\[ P[\Delta(\tau_s) \le x] \le c\,x^b, \]
where the random variable τ_s is a successor state of s drawn from the MDP's transition probabilities, then d = 0 and consequently the sample complexity is of order 1/ε².
Remark 2. If there exists Δ_min such that for any near-optimal MAX node s, Δ(s) ≥ Δ_min, then d = 0 and the sample complexity is of order 1/ε². Indeed, in this case P[Δ(τ_s) ≤ x] ≤ (x/Δ_min)^b for any b > 2, so d = 0 by Lemma 1.
5 Conclusion
We provide a new Monte-Carlo planning algorithm TrailBlazer that works for MDPs where the
number of next states N can be either finite or infinite. TrailBlazer is easy to implement and
is numerically efficient. It comes packaged with a PAC consistency and two problem-dependent
sample-complexity guarantees expressed in terms of a measure (defined by ?) of the quantity of
near-optimal nodes or a measure (defined by d) of the difficulty to identify the near-optimal paths.
The sample complexity of TrailBlazer improves over previous worst-case guarantees. What's
more, TrailBlazer exploits MDPs with specific structure by exploring only a fraction of the whole
search space when either κ or d is small. In particular, we showed that if the set of near-optimal nodes has non-vanishing action-gaps, then the sample complexity is Õ(1/ε²), which is the same rate as Monte-Carlo sampling. This is pretty decent evidence that TrailBlazer is a natural extension of
Monte-Carlo sampling to stochastic control problems.
Acknowledgements The research presented in this paper was supported by French Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council, a doctoral grant of École Normale Supérieure in Paris, Inria and Carnegie Mellon University associated-team project EduBand, and French National Research Agency projects ExTra-Learn (n.ANR-14-CE24-0010-01) and BoB (n.ANR-16-CE23-0003).
References
[1] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed
bandit problem. Machine Learning, 47(2-3):235?256, 2002.
[2] Richard Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[3] Dimitri Bertsekas and John Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific,
Belmont, MA, 1996.
[4] Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling,
Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton.
A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence
and AI in Games, 4(1):1?43, 2012.
[5] Sébastien Bubeck and Rémi Munos. Open-loop optimistic planning. In Conference on Learning
Theory, 2010.
[6] Lucian Buşoniu and Rémi Munos. Optimistic planning for Markov decision processes. In
International Conference on Artificial Intelligence and Statistics, 2012.
[7] Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. Computers
and games, 4630:72?83, 2007.
[8] Sylvain Gelly, Wang Yizao, Rémi Munos, and Olivier Teytaud. Modification of UCT with
patterns in Monte-Carlo Go. Technical report, Inria, 2006. URL https://hal.inria.fr/
inria-00117266.
[9] Arthur Guez, David Silver, and Peter Dayan. Efficient Bayes-adaptive reinforcement learning
using sample-based search. Neural Information Processing Systems, 2012.
[10] Jean-François Hren and Rémi Munos. Optimistic Planning of Deterministic Systems. In
European Workshop on Reinforcement Learning, 2008.
[11] Michael Kearns, Yishay Mansour, and Andrew Y. Ng. A sparse sampling algorithm for nearoptimal planning in large Markov decision processes. In International Conference on Artificial
Intelligence and Statistics, 1999.
[12] Levente Kocsis and Csaba Szepesvári. Bandit-based Monte-Carlo planning. In European
Conference on Machine Learning, 2006.
[13] Rémi Munos. From bandits to Monte-Carlo tree search: The optimistic principle applied to
optimization and planning. Foundations and Trends in Machine Learning, 7(1):1?130, 2014.
[14] David Silver and Joel Veness. Monte-Carlo planning in large POMDPs. In Neural Information
Processing Systems, 2010.
[15] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander
Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap,
Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the
game of Go with deep neural networks and tree search. Nature, 529(7587):484?489, 2016.
[16] Balázs Szörényi, Gunnar Kedenburg, and Rémi Munos. Optimistic planning in Markov decision
processes using a generative model. In Neural Information Processing Systems, 2014.
[17] Thomas J Walsh, Sergiu Goschin, and Michael L Littman. Integrating sample-based planning
and model-based reinforcement learning. AAAI Conference on Artificial Intelligence, 2010.
5,807 | 6,254 | Domain Separation Networks
Konstantinos Bousmalis*
Google Brain
Mountain View, CA
[email protected]
Nathan Silberman
Google Research
New York, NY
[email protected]
George Trigeorgis*†
Imperial College London
London, UK
[email protected]
Dilip Krishnan
Google Research
Cambridge, MA
[email protected]
Dumitru Erhan
Google Brain
Mountain View, CA
[email protected]
Abstract
The cost of large scale data collection and annotation often makes the application
of machine learning algorithms to new tasks or datasets prohibitively expensive.
One approach circumventing this cost is training models on synthetic data where
annotations are provided automatically. Despite their appeal, such models often
fail to generalize from synthetic to real images, necessitating domain adaptation
algorithms to manipulate these models before they can be successfully applied.
Existing approaches focus either on mapping representations from one domain to
the other, or on learning to extract features that are invariant to the domain from
which they were extracted. However, by focusing only on creating a mapping
or shared representation between the two domains, they ignore the individual
characteristics of each domain. We hypothesize that explicitly modeling what is
unique to each domain can improve a model's ability to extract domain-invariant
features. Inspired by work on private-shared component analysis, we explicitly
learn to extract image representations that are partitioned into two subspaces: one
component which is private to each domain and one which is shared across domains.
Our model is trained to not only perform the task we care about in the source
domain, but also to use the partitioned representation to reconstruct the images
from both domains. Our novel architecture results in a model that outperforms
the state-of-the-art on a range of unsupervised domain adaptation scenarios and
additionally produces visualizations of the private and shared representations
enabling interpretation of the domain adaptation process.
1 Introduction
The recent success of supervised learning algorithms has been partially attributed to the large-scale
datasets [16, 22] on which they are trained. Unfortunately, collecting, annotating, and curating such
datasets is an extremely expensive and time-consuming process. An alternative would be creating
large-scale datasets in non-realistic but inexpensive settings, such as computer generated scenes.
While such approaches offer the promise of effectively unlimited amounts of labeled data, models
trained in such settings do not generalize well to realistic domains. Motivated by this, we examine the
problem of learning representations that are domain-invariant in scenarios where the data distributions
during training and testing are different. In this setting, the source data is labeled for a particular task
and we would like to transfer knowledge from the source to the target domain for which we have no
ground truth labels.
In this work, we focus on the tasks of object classification and pose estimation, where the object of
interest is in the foreground of a given image, for both source and target domains. The source and
* Authors contributed equally.
† This work was completed while George Trigeorgis was at Google Brain in Mountain View, CA.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
target pixel distributions can differ in a number of ways. We define "low-level" differences in the distributions as those arising due to noise, resolution, illumination and color. "High-level" differences
relate to the number of classes, the types of objects, and geometric variations, such as 3D position
and pose. We assume that our source and target domains differ mainly in terms of the distribution of
low level image statistics and that they have high level parameters with similar distributions and the
same label space.
We propose a novel architecture, which we call Domain Separation Networks (DSN), to learn domain-invariant representations. Previous work attempts to either find a mapping from representations of
the source domain to those of the target [26], or find representations that are shared between the
two domains [8, 28, 17]. While this, in principle, is a good idea, it leaves the shared representations
vulnerable to contamination by noise that is correlated with the underlying shared distribution [24].
Our model, in contrast, introduces the notion of a private subspace for each domain, which captures
domain specific properties, such as background and low level image statistics. A shared subspace,
enforced through the use of autoencoders and explicit loss functions, captures representations shared
by the domains. By finding a shared subspace that is orthogonal to the subspaces that are private,
our model is able to separate the information that is unique to each domain, and in the process
produce representations that are more meaningful for the task at hand. Our method outperforms the
state-of-the-art domain adaptation techniques on a range of datasets for object classification and pose
estimation, while having an interpretability advantage by allowing the visualization of these private
and shared representations. In Sec. 2, we survey related work and introduce relevant terminology.
Our architecture, loss functions, and learning regime are presented in Sec. 3. Experimental results
and discussion are given in Sec. 4. Finally, conclusions and directions for future work are in Sec. 5.
2 Related Work
Learning to perform unsupervised domain adaptation is an open theoretical and practical problem.
While much prior art exists, our literature review focuses primarily on Convolutional Neural Network
(CNN) based methods due to their empirical superiority on this problem [8, 17, 26, 29]. Ben-David
et al. [4] provide upper bounds on a domain-adapted classifier in the target domain. They introduce
the idea of a binary classifier trained to distinguish source and target domains. The error that this "domain incoherence" classifier provides (along with the error of a source-domain-specific classifier) combines to give the overall bounds. Mansour et al. [18] extend the theory of [4] to handle
the case of multiple source domains.
Ganin et al. [7, 8] and Ajakan et al. [2] use adversarial training to find domain-invariant representations in-network. Their Domain-Adversarial Neural Networks (DANN) exhibit an architecture
whose first few feature extraction layers are shared by two classifiers trained simultaneously. The first
is trained to correctly predict task-specific class labels on the source data while the second is trained
to predict the domain of each input. DANN minimizes the domain classification loss with respect
to parameters specific to the domain classifier, while maximizing it with respect to the parameters
that are common to both classifiers. This minimax optimization becomes possible via the use of a
gradient reversal layer (GRL).
Tzeng et al. [29] and Long et al. [17] proposed versions of this model where the maximization of
the domain classification loss is replaced by the minimization of the Maximum Mean Discrepancy
(MMD) metric [11]. The MMD metric is computed between features extracted from sets of samples
from each domain. The Deep Domain Confusion Network by Tzeng et al. [29] has an MMD loss at
one layer in the CNN architecture while Long et al. [17] proposed the Deep Adaptation Network
that has MMD losses at multiple layers.
Other related techniques involve learning a transformation from one domain to the other. In this setup,
the feature extraction pipeline is fixed during the domain adaptation optimization. This has been
applied in various non-CNN based approaches [9, 5, 10] as well as the recent CNN-based Correlation
Alignment (CORAL) [26] algorithm which ?recolors? whitened source features with the covariance
of features from the target domain.
3 Method
While the Domain Separation Networks (DSNs) could in principle be applicable to other learning
tasks, without loss of generality, we mainly use image classification as the cross-domain task.
Given a labeled dataset in a source domain and an unlabeled dataset in a target domain, our goal is to
train a classifier on data from the source domain that generalizes to the target domain. Like previous
Figure 1: A shared-weight encoder E_c(x) learns to capture representation components for a given input sample that are shared among domains. A private encoder E_p(x) (one for each domain) learns to capture domain-specific components of the representation. A shared decoder learns to reconstruct the input sample by using both the private and source representations. The private and shared representation components are pushed apart with soft subspace orthogonality constraints L_difference, whereas the shared representation components are kept similar with a similarity loss L_similarity.
efforts [7, 8], our model is trained such that the representations of images from the source domain are
similar to those from the target domain. This allows a classifier trained on images from the source
domain to generalize as the inputs to the classifier are in theory invariant to the domain of origin.
However, these representations might trivially include noise that is highly correlated with the shared
representation, as shown by Salzmann et al. [24].
Our main novelty is that, inspired by recent work [14, 24, 30] on shared-space component analysis,
DSNs explicitly model both private and shared components of the domain representations. The two
private components of the representation are specific to each domain and the shared component of the
representation is shared by both domains. To induce the model to produce such split representations,
we add a loss function that encourages independence of these parts. Finally, to ensure that the
private representations are still useful (avoiding trivial solutions) and to add generalizability, we also
add a reconstruction loss. The combination of these objectives is a model that produces a shared
representation that is similar for both domains and a private representation that is domain specific.
By partitioning the space in such a manner, the classifier trained on the shared representation is better
able to generalize across domains as its inputs are uncontaminated with aspects of the representation
that are unique to each domain.
Let X^s = {(x^s_i, y^s_i)}_{i=0}^{N_s} represent a labeled dataset of N_s samples from the source domain where x^s_i ∼ D_S, and let X^t = {x^t_i}_{i=0}^{N_t} represent an unlabeled dataset of N_t samples from the target domain where x^t_i ∼ D_T. Let E_c(x; θ_c) be a function parameterized by θ_c which maps an image x to a hidden representation h_c representing features that are common or shared across domains. Let E_p(x; θ_p) be an analogous function which maps an image x to a hidden representation h_p representing features that are private to each domain. Let D(h; θ_d) be a decoding function mapping a hidden representation h to an image reconstruction x̂. Finally, G(h; θ_g) represents a task-specific function, parameterized by θ_g, that maps from hidden representations h to the task-specific predictions ŷ. The resulting Domain Separation Network (DSN) model is depicted in Fig. 1.
3.1 Learning
Inference in a DSN model is given by x̂ = D(E_c(x) + E_p(x)) and ŷ = G(E_c(x)), where x̂ is the reconstruction of the input x and ŷ is the task-specific prediction. The goal of training is to minimize the following loss with respect to parameters Θ = {θ_c, θ_p, θ_d, θ_g}:
\[ \mathcal{L} = \mathcal{L}_{\text{task}} + \alpha\, \mathcal{L}_{\text{recon}} + \beta\, \mathcal{L}_{\text{difference}} + \gamma\, \mathcal{L}_{\text{similarity}} \tag{1} \]
where α, β, γ are weights that control the interaction of the loss terms. The classification loss L_task
trains the model to predict the output labels we are ultimately interested in. Because we assume the
target domain is unlabeled, the loss is applied only to the source domain. We want to minimize the
negative log-likelihood of the ground truth class for each source domain sample:
\[ \mathcal{L}_{\text{task}} = -\sum_{i=0}^{N_s} \mathbf{y}^s_i \cdot \log \hat{\mathbf{y}}^s_i, \tag{2} \]
where y^s_i is the one-hot encoding of the class label for source input i and ŷ^s_i are the softmax predictions of the model: ŷ^s_i = G(E_c(x^s_i)).
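As a sketch (ours, in PyTorch; the paper does not prescribe a framework or these variable names), the task loss is an ordinary cross-entropy applied to source samples only:

import torch.nn.functional as F

def task_loss(classifier, shared_source_features, source_labels):
    # Negative log-likelihood of the ground-truth class, Eq. (2).
    # shared_source_features: output of the shared encoder E_c on source images.
    # source_labels: integer class indices (equivalent to one-hot targets).
    # classifier: the task network G, producing unnormalized logits.
    logits = classifier(shared_source_features)
    return F.cross_entropy(logits, source_labels)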
We use a scale-invariant mean squared error term [6] for the reconstruction loss L_recon, which is applied to both domains:
\[ \mathcal{L}_{\text{recon}} = \sum_{i=1}^{N_s} \mathcal{L}_{\text{si\_mse}}(\mathbf{x}^s_i, \hat{\mathbf{x}}^s_i) + \sum_{i=1}^{N_t} \mathcal{L}_{\text{si\_mse}}(\mathbf{x}^t_i, \hat{\mathbf{x}}^t_i), \tag{3} \]
\[ \mathcal{L}_{\text{si\_mse}}(\mathbf{x}, \hat{\mathbf{x}}) = \frac{1}{k}\,\|\mathbf{x} - \hat{\mathbf{x}}\|_2^2 - \frac{1}{k^2}\big([\mathbf{x} - \hat{\mathbf{x}}] \cdot \mathbf{1}_k\big)^2, \tag{4} \]
where k is the number of pixels in input x, 1_k is a vector of ones of length k, and ‖·‖²₂ is the squared L₂-norm. While a mean squared error loss is traditionally used for reconstruction tasks, it penalizes predictions that are correct up to a scaling term. Conversely, the scale-invariant mean squared error penalizes differences between pairs of pixels. This allows the model to learn to reproduce the overall shape of the objects being modeled without expending modeling power on the absolute color or intensity of the inputs. We validated that this reconstruction loss was indeed the correct choice experimentally in Sec. 4.3 by training a version of our best DSN model with the traditional mean squared error loss instead of the scale-invariant loss in Eq. 3.
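A PyTorch sketch of the scale-invariant term in Eq. (4) (ours; the flattened-batch convention is an assumption):

import torch

def scale_invariant_mse(x, x_hat):
    # Scale-invariant mean squared error of Eq. (4), averaged over a batch.
    # x, x_hat: tensors of shape (batch, k) holding flattened images.
    diff = x - x_hat
    k = diff.shape[1]
    mse = (diff ** 2).sum(dim=1) / k          # (1/k) * ||x - x_hat||_2^2
    bias = diff.sum(dim=1) ** 2 / (k ** 2)    # (1/k^2) * ([x - x_hat] . 1_k)^2
    return (mse - bias).mean()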
The difference loss is also applied to both domains and encourages the shared and private encoders to
encode different aspects of the inputs. We define the loss via a soft subspace orthogonality constraint
between the private and shared representation of each domain. Let H^s_c and H^t_c be matrices whose rows are the hidden shared representations h^s_c = E_c(x^s) and h^t_c = E_c(x^t) from samples of source and target data respectively. Similarly, let H^s_p and H^t_p be matrices whose rows are the private representations h^s_p = E^s_p(x^s) and h^t_p = E^t_p(x^t) from samples of source and target data respectively³. The difference loss encourages orthogonality between the shared and the private representations:
\[ \mathcal{L}_{\text{difference}} = \big\|{H^s_c}^\top H^s_p\big\|_F^2 + \big\|{H^t_c}^\top H^t_p\big\|_F^2, \tag{5} \]
where ‖·‖²_F is the squared Frobenius norm. Finally, L_similarity encourages the hidden representations h^s_c and h^t_c from the shared encoder to be as similar as possible irrespective of the domain. We experimented with two similarity losses, which we discuss in detail.
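Before turning to the similarity losses, here is a sketch of the per-domain orthogonality term in Eq. (5) (ours, in PyTorch; normalizing each row to zero mean and unit norm is one plausible reading of footnote 3), to be summed over the source and target domains:

import torch
import torch.nn.functional as F

def difference_loss(shared, private):
    # Squared Frobenius norm of H_c^T H_p for one domain, Eq. (5).
    # shared, private: (batch, dim) matrices of hidden representations.
    shared = F.normalize(shared - shared.mean(dim=0, keepdim=True), p=2, dim=1)
    private = F.normalize(private - private.mean(dim=0, keepdim=True), p=2, dim=1)
    correlation = shared.t() @ private        # (dim, dim) matrix H_c^T H_p
    return (correlation ** 2).sum()           # ||H_c^T H_p||_F^2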
3.2 Similarity Losses
The domain adversarial similarity loss [7, 8] is used to train a model to produce representations
such that a classifier cannot reliably predict the domain of the encoded representation. Maximizing
such "confusion" is achieved via a Gradient Reversal Layer (GRL) and a domain classifier trained
to predict the domain producing the hidden representation. The GRL has the same output as the
identity function, but reverses the gradient direction. Formally, for some function f (u), the GRL
d
d
is defined as Q (f (u)) = f (u) with a gradient du
Q(f (u)) = ? du
f (u). The domain classifier
Z(Q(hc ); ? z ) ? d? parameterized by ? z maps a shared representation vector hc = Ec (x; ? c ) to a
prediction of the label d? ? {0, 1} of the input sample x. Learning with a GRL is adversarial in that
? z is optimized to increase Z?s ability to discriminate between encodings of images from the source
or target domains, while the reversal of the gradient results in the model parameters ? c learning
representations from which domain classification accuracy is reduced. Essentially, we maximize the
binomial cross-entropy for the domain prediction task with respect to ? z , while minimizing it with
respect to ? c :
$$\mathcal{L}_{similarity}^{DANN} = \sum_{i=0}^{N_s+N_t} \left\{ d_i \log \hat{d}_i + (1 - d_i) \log(1 - \hat{d}_i) \right\}. \qquad (6)$$
where $d_i \in \{0, 1\}$ is the ground truth domain label for sample $i$.
³The matrices are transformed to have zero mean and unit $\ell_2$ norm.
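Conceptually, the GRL is just an identity whose backward pass flips the sign of the gradient; in an autodiff framework it is registered as a custom op. A minimal sketch of the two passes (names are illustrative):

def grl_forward(x):
    # Q(f(u)) = f(u): identity on the forward pass.
    return x

def grl_backward(grad_output, lam=1.0):
    # d/du Q(f(u)) = -d/du f(u): reverse (and optionally scale) the
    # gradient flowing back into the encoder, so the encoder is pushed
    # to *increase* the domain classifier's loss.
    return -lam * grad_output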
The Maximum Mean Discrepancy (MMD) loss [11] is a kernel-based distance function between pairs of samples. We use a biased statistic for the squared population MMD between the shared encodings of the source samples $\mathbf{h}_c^s$ and the shared encodings of the target samples $\mathbf{h}_c^t$:
$$\mathcal{L}_{similarity}^{MMD} = \frac{1}{(N^s)^2} \sum_{i,j=0}^{N^s} \kappa(\mathbf{h}_{ci}^s, \mathbf{h}_{cj}^s) - \frac{2}{N^s N^t} \sum_{i,j=0}^{N^s, N^t} \kappa(\mathbf{h}_{ci}^s, \mathbf{h}_{cj}^t) + \frac{1}{(N^t)^2} \sum_{i,j=0}^{N^t} \kappa(\mathbf{h}_{ci}^t, \mathbf{h}_{cj}^t), \qquad (7)$$
where $\kappa(\cdot, \cdot)$ is a PSD kernel function. In our experiments we used a linear combination of multiple RBF kernels: $\kappa(x_i, x_j) = \sum_n \eta_n \exp\{-\frac{1}{2\sigma_n}\|x_i - x_j\|^2\}$, where $\sigma_n$ is the standard deviation and $\eta_n$ is the weight for our $n$th RBF kernel. Any additional kernels we include in the multi-RBF kernel are additive and guarantee that their linear combination remains characteristic. Therefore, having a large range of kernels is beneficial, since the distributions of the shared features change during learning, and different components of the multi-RBF kernel might be responsible at different times for making sure we reject a false null hypothesis, i.e. that the loss is sufficiently high when the distributions are not similar [17]. The advantage of using an RBF kernel with the MMD distance is that the Taylor expansion of the Gaussian function allows us to match all the moments of the two populations. The caveat is that it requires finding optimal kernel bandwidths $\sigma_n$.
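A minimal NumPy sketch of the biased MMD statistic of Eq. 7 with a multi-RBF kernel (the bandwidths and weights below are placeholders, not the 19-kernel configuration used in the experiments):

import numpy as np

def multi_rbf_kernel(X, Y, sigmas=(1.0, 5.0, 10.0), etas=(1.0, 1.0, 1.0)):
    # kappa(x, y) = sum_n eta_n * exp(-||x - y||^2 / (2 sigma_n))
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return sum(eta * np.exp(-sq / (2 * s)) for eta, s in zip(etas, sigmas))

def mmd2(Hs, Ht):
    # Biased squared population MMD between source and target encodings.
    Ns, Nt = len(Hs), len(Ht)
    return (multi_rbf_kernel(Hs, Hs).sum() / Ns**2
            - 2 * multi_rbf_kernel(Hs, Ht).sum() / (Ns * Nt)
            + multi_rbf_kernel(Ht, Ht).sum() / Nt**2)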
4 Evaluation
We are motivated by the problem of learning models on a clean, synthetic dataset and testing on a noisy, real-world dataset. To this end, we evaluate on object classification datasets used in previous work⁴, including MNIST and MNIST-M [8], the German Traffic Signs Recognition Benchmark (GTSRB) [25], and the Streetview House Numbers (SVHN) [20]. We also evaluate on the cropped LINEMOD dataset, a standard for object instance recognition and 3D pose estimation [12, 31], for which we have synthetic and real data⁵. We tested the following unsupervised domain adaptation scenarios: (a) from MNIST to MNIST-M; (b) from SVHN to MNIST; (c) from synthetic traffic signs to real ones with GTSRB; (d) from synthetic LINEMOD object instances rendered on a black background to the same object instances in the real world.
We evaluate the efficacy of our method with each of the two similarity losses outlined in Sec. 3.2 by
comparing against the prevailing visual domain adaptation techniques for neural networks: Correlation Alignment (CORAL) [26], Domain-Adversarial Neural Networks (DANN) [7, 8], and MMD
regularization [29, 17]. For each scenario we provide two additional baselines: the performance on the target domain of the respective model with no domain adaptation and trained (a) on the source domain ("Source-only" in Tab. 1) and (b) on the target domain ("Target-only"), as an empirical lower and upper bound respectively.
We have not found a universally applicable way to optimize hyperparameters for unsupervised domain adaptation. Previous work [8] suggests the use of reverse validation. We implemented this (see Supplementary Material for details) but found that the reverse validation accuracy did not always align well with test accuracy. Ideally we would like to avoid using labels from the target domain, as it can be argued that if one does have target domain labels, they should be used during training. However, there are applications where a labeled target domain set cannot be used for training. An example is the labeling of a dataset with the use of AprilTags [21], 2D barcodes that can be used to label the pose of an object, provided that a camera is calibrated and the physical dimensions of the barcode are known. These images should not be used when learning features from pixels, because the model might be able to decipher the tags. However, they can be part of a test set that is not available during training, and an equivalent dataset without the tags could be used for unsupervised domain adaptation. We thus chose to use a small set of labeled target domain data as a validation set for the hyperparameters of all the methods we compare.
⁴The most commonly used dataset for visual domain adaptation in the context of object classification is Office [23]. However, this dataset exhibits significant variations in both low-level and high-level parameter distributions. Low-level variations are due to the different cameras and background textures in the images (e.g. Amazon versus DSLR). However, there are significant high-level variations due to object identity: e.g. the motorcycle class contains non-motorcycle objects; the backpack class contains a laptop; some domains contain the object in only one pose. Other commonly used datasets such as Caltech-256 suffer from similar problems. We therefore exclude these datasets from our evaluation. For more information, see our Supplementary Material.
⁵https://cvarlab.icg.tugraz.at/projects/3d_object_detection/
Table 1: Mean classification accuracy (%) for the unsupervised domain adaptation scenarios we evaluated all the methods on. We have replicated the experiments from Ganin et al. [8] and in parentheses we show the results reported in their paper. The "Source-only" and "Target-only" rows are the results on the target domain when using no domain adaptation and training only on the source or the target domain respectively.

Model                MNIST to      Synth Digits   SVHN to       Synth Signs
                     MNIST-M       to SVHN        MNIST         to GTSRB
Source-only          56.6 (52.2)   86.7 (86.7)    59.2 (54.9)   85.1 (79.0)
CORAL [26]           57.7          85.2           63.1          86.9
MMD [29, 17]         76.9          88.0           71.1          91.1
DANN [8]             77.4 (76.6)   90.3 (91.0)    70.7 (73.8)   92.9 (88.6)
DSN w/ MMD (ours)    80.5          88.5           72.2          92.6
DSN w/ DANN (ours)   83.2          91.2           82.7          93.1
Target-only          98.7          92.4           99.5          99.8
All methods were evaluated using the same protocol, so the comparison numbers are fair and meaningful. The performance on this validation set can serve as an upper bound of a satisfactory validation metric for unsupervised domain adaptation; to our knowledge, validating the parameters in an unsupervised manner is still an open research question, and out of the scope of this work.
4.1 Datasets and Adaptation Scenarios
MNIST to MNIST-M. In this domain adaptation scenario we use the popular MNIST [15] dataset of handwritten digits as the source domain, and MNIST-M, a variation of MNIST proposed for unsupervised domain adaptation by [8]. MNIST-M was created by using each MNIST digit as a binary mask and inverting with it the colors of a background image. The background images are random crops uniformly sampled from the Berkeley Segmentation Data Set (BSDS500) [3]. In all our experiments we follow the experimental protocol of [8]. Out of the 59,001 MNIST-M training examples, we used the labels for 1,000 of them to find optimal hyperparameters for our models. This scenario, like all three digit adaptation scenarios, has 10 class labels.
Synthetic Digits to SVHN. In this scenario we aim to learn a classifier for the Street-View House Numbers data set (SVHN) [20], our target domain, from a dataset of purely synthesized digits, our source domain. The synthetic digits [8] dataset was created by rasterizing bitmap fonts in a sequence (one, two, and three digits) with the ground truth label being the digit in the center of the image, just like in SVHN. The source domain samples are further augmented by variations in scale, translation, background colors, stroke colors, and Gaussian blurring. We use 479,400 Synthetic Digits for our source domain training set, 73,257 unlabeled SVHN samples for domain adaptation, and 26,032 SVHN samples for testing. Similarly to above, we use the labels of 1,000 SVHN training examples for hyperparameter validation.
SVHN to MNIST. Although the SVHN dataset contains significant variations (in scale, background clutter, blurring, embossing, slanting, contrast, rotation, sequences, to name a few), there is not a lot of variation in the actual digit shapes. This makes it quite distinct from a dataset of handwritten digits, like MNIST, where there are a lot of elastic distortions in the shapes, variations in thickness, and noise on the digits themselves. Since the ground truth digits in both datasets are centered, this is a well-posed and rather difficult domain adaptation scenario. As above, we used the labels of 1,000 MNIST training examples for validation.
Synthetic Signs to GTSRB. We also perform an experiment using a dataset of synthetic traffic signs from [19] as the source domain and a real-world dataset of traffic signs (GTSRB) [25] as the target. While the three digit adaptation scenarios have 10 class labels, this scenario has 43 different traffic signs. The synthetic signs were obtained by taking relevant pictograms and adding various types of variations, including random backgrounds, brightness, saturation, 3D rotations, Gaussian and motion blur. We use 90,000 synthetic signs for training, 1,280 random GTSRB real-world signs for domain adaptation and validation, and the remaining 37,929 GTSRB real signs as the test set.
Table 2: Mean classification accuracy and pose error for the "Synth Objects to LINEMOD" scenario.

Method               Classification Accuracy   Mean Angle Error
Source-only          47.33%                    89.2°
MMD                  72.35%                    70.62°
DANN                 99.90%                    56.58°
DSN w/ MMD (ours)    99.72%                    66.49°
DSN w/ DANN (ours)   100.00%                   53.27°
Target-only          100.00%                   6.47°
Synthetic Objects to LineMod. The LineMod dataset [31] consists of CAD models of objects in a cluttered environment and a high variance of 3D poses for each object. We use the 11 non-symmetric objects from the cropped version of the dataset, where the images are cropped with the object in the center, for the task of object instance recognition and 3D pose estimation. We train our models on 16,962 images for these objects rendered on a black background without additional noise. We use a target domain training set of 10,673 real-world images for domain adaptation and validation, and a target domain test set of 2,655 for testing. For this scenario our task is both classification and pose estimation; our task loss is therefore $\mathcal{L}_{task} = \sum_{i=0}^{N_s} \left\{ -\mathbf{y}_i^s \cdot \log \hat{\mathbf{y}}_i^s + \xi \log(1 - |\mathbf{q}^s \cdot \hat{\mathbf{q}}^s|) \right\}$, where $\mathbf{q}^s$ is the positive unit quaternion vector representing the ground truth 3D pose, and $\hat{\mathbf{q}}^s$ is the equivalent prediction. The first term is the classification loss, similar to the rest of the experiments; the second term is the log of a 3D rotation metric for quaternions [13]; and $\xi$ is the weight for the pose loss. In Tab. 2 we report the mean angle the object would need to be rotated (on a fixed 3D axis) to move from the predicted to the ground truth pose [12].
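For concreteness, a small NumPy sketch of the pose term and of the reported angle metric (the epsilon and names are our additions; both quaternions are assumed unit-norm, and the absolute value handles the q ≡ −q ambiguity):

import numpy as np

def pose_log_loss(q_true, q_pred, xi=1.0):
    # xi * log(1 - |q . q_hat|): minimizing this drives |q . q_hat| -> 1.
    dot = np.abs(np.clip(q_true @ q_pred, -1.0, 1.0))
    return xi * np.log(1.0 - dot + 1e-8)      # eps guards log(0)

def angle_error_deg(q_true, q_pred):
    # Rotation angle (degrees) about a fixed axis from predicted to true
    # pose: the quaternion metric 2 * arccos(|q . q_hat|) of [13].
    dot = np.abs(np.clip(q_true @ q_pred, -1.0, 1.0))
    return np.degrees(2.0 * np.arccos(dot))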
Figure 2: Reconstructions for the representations of the two domains for "MNIST to MNIST-M" and for "Synth Objects to LINEMOD". Panels: (a) MNIST (source); (b) MNIST-M (target); (c) Synth Objects (source); (d) LINEMOD (target). In each block from left to right: the original image $\mathbf{x}^t$; reconstructed image $D(E_c(\mathbf{x}^t) + E_p(\mathbf{x}^t))$; shared-only reconstruction $D(E_c(\mathbf{x}^t))$; private-only reconstruction $D(E_p(\mathbf{x}^t))$.
4.2 Implementation Details
All the models were implemented using TensorFlow⁶ [1] and were trained with Stochastic Gradient Descent plus momentum [27]. Our initial learning rate was multiplied by 0.9 every 20,000 steps (mini-batches). We used batches of 32 samples from each domain for a total of 64, and the input images were mean-centered and rescaled to [−1, 1]. In order to avoid distractions for the main classification task during the early stages of the training procedure, we activate any additional domain adaptation loss after 10,000 steps of training. For all our experiments our CNN topologies are based on the ones used in [8], to be comparable to previous work in unsupervised domain adaptation. The exact architectures for all models are shown in our Supplementary Material.
In our framework, CORAL [26] would be equivalent to fixing our shared representation matrices $\mathbf{H}_c^s$ and $\mathbf{H}_c^t$, normalizing them, and then minimizing $\|A \mathbf{H}_c^{s\top} \mathbf{H}_c^s A^\top - \mathbf{H}_c^{t\top} \mathbf{H}_c^t\|_F^2$ with respect to a weight matrix $A$ that aligns the two correlation matrices. For the CORAL experiments, we follow the suggestions of [26], and extract features for both source and target domains from the penultimate layer of each network. Once the correlation matrices for each domain are aligned, we evaluate on the target test data the performance of a linear support vector machine (SVM) classifier trained on the source training data.
⁶We provide code at https://github.com/tensorflow/models/domain_adaptation.
Table 3: Effect of our difference and reconstruction losses on our best model. The first row is replicated from Tab. 1. In the second row, we remove the soft orthogonality constraint. In the third row, we replace the scale-invariant MSE with regular MSE.

Model                MNIST to   Synth. Digits   SVHN to   Synth. Signs
                     MNIST-M    to SVHN         MNIST     to GTSRB
All terms            83.23      91.22           82.78     93.01
No L_difference      80.26      89.21           80.54     91.89
With L^L2_recon      80.42      88.98           79.45     92.11
The SVM penalty parameter was optimized based on the target domain
validation set for each of our domain adaptation scenarios. For MMD regularization, we used a linear
combination of 19 RBF kernels (details can be found in the Supplementary Material). Preliminary experiments with applying MMD on more than one layer did not show any performance improvement for our experiments and architectures. For DANN regularization, we applied the GRL and the domain classifier as prescribed in [8] for each scenario.
For our Domain Separation Network experiments, our similarity losses are always applied at the
first fully connected layer of each network after a number of convolutional and max pooling layers.
For each private space encoder network we use a simple convolutional and max pooling structure
followed by a fully-connected layer with a number of nodes equal to the number of nodes at the final
layer hc of the equivalent shared encoder Ec . The output of the shared and private encoders gets
added before being fed to the shared decoder D.
4.3 Discussion
The DSN with DANN model outperforms all the other methods we experimented with for all our unsupervised domain adaptation scenarios (see Tab. 1 and 2). Our unsupervised domain separation networks are able to improve both upon MMD regularization and DANN. Using DANN as a similarity loss (Eq. 6) worked better than using MMD (Eq. 7) as a similarity loss, which is consistent with results obtained for domain adaptation using MMD regularization and DANN alone.

In order to examine the effect of the soft orthogonality constraints ($\mathcal{L}_{difference}$), we took our best model, our DSN model with the DANN loss, and removed these constraints by setting the $\beta$ coefficient to 0. Without them, the model performed consistently worse in all scenarios. We also validated our choice of the scale-invariant mean squared error reconstruction loss, as opposed to the more popular mean squared error loss, by running our best model with $\mathcal{L}_{recon}^{L2} = \frac{1}{k}\|\mathbf{x} - \hat{\mathbf{x}}\|_2^2$. With this variation we also get consistently worse classification results, as shown in the experiments of Tab. 3.
The shared and private representations of each domain are combined for the reconstruction of samples.
Individually decoding the shared and private representations gives us reconstructions that serve as
useful depictions of our domain adaptation process. In Fig. 2 we use the "MNIST to MNIST-M" and the "Synth. Objects to LINEMOD" scenarios for such visualizations. In the former scenario, the
model cleanly separates the foreground from the background and produces a shared space that is very
similar to the source domain. This is expected since the target is a transformation of the source. In the
latter scenario, the model is able to produce visualizations of the shared representation that look very
similar between source and target domains, which are useful for classification and pose estimation.
5 Conclusion
We present in this work a deep learning model that improves upon existing unsupervised domain
adaptation techniques. The model does so by explicitly separating representations private to each
domain and shared between source and target domains. By using existing domain adaptation
techniques to make the shared representations similar, and soft subspace orthogonality constraints to
make private and shared representations dissimilar, our method outperforms all existing unsupervised
domain adaptation methods in a number of adaptation scenarios that focus on the synthetic-to-real
paradigm.
Acknowledgments
We would like to thank Samy Bengio, Kevin Murphy, and Vincent Vanhoucke for valuable comments
on this work. We would also like to thank Yaroslav Ganin and Paul Wohlhart for providing some of
the datasets we used.
References
[1] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. Preprint arXiv:1603.04467, 2016.
[2] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand. Domain-adversarial neural networks. Preprint arXiv:1412.4446, 2014.
[3] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. TPAMI, 33(5):898–916, 2011.
[4] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151–175, 2010.
[5] R. Caseiro, J. F. Henriques, P. Martins, and J. Batista. Beyond the shortest path: Unsupervised domain adaptation by sampling subspaces along the spline flow. In CVPR, 2015.
[6] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In NIPS, pages 2366–2374, 2014.
[7] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 513–520, 2015.
[8] Y. Ganin et al. Domain-adversarial training of neural networks. JMLR, 17(59):1–35, 2016.
[9] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pages 2066–2073. IEEE, 2012.
[10] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011.
[11] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, pages 723–773, 2012.
[12] S. Hinterstoisser et al. Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes. In ACCV, 2012.
[13] D. Q. Huynh. Metrics for 3D rotations: Comparison and analysis. Journal of Mathematical Imaging and Vision, 35(2):155–164, 2009.
[14] Y. Jia, M. Salzmann, and T. Darrell. Factorized latent spaces with structured sparsity. In NIPS, pages 982–990, 2010.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV 2014, pages 740–755. Springer, 2014.
[17] M. Long and J. Wang. Learning transferable features with deep adaptation networks. In ICML, 2015.
[18] Y. Mansour et al. Domain adaptation with multiple sources. In NIPS, 2009.
[19] B. Moiseev, A. Konev, A. Chigorin, and A. Konushin. Evaluation of traffic sign recognition methods trained on synthetically generated data. In ACIVS, pages 576–583, 2013.
[20] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshops, 2011.
[21] E. Olson. AprilTag: A robust and flexible visual fiducial system. In ICRA, pages 3400–3407. IEEE, 2011.
[22] O. Russakovsky et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
[23] K. Saenko et al. Adapting visual category models to new domains. In ECCV. Springer, 2010.
[24] M. Salzmann et al. Factorized orthogonal latent spaces. In AISTATS, pages 701–708, 2010.
[25] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012.
[26] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
[27] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139–1147, 2013.
[28] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In CVPR, pages 4068–4076, 2015.
[29] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. Preprint arXiv:1412.3474, 2014.
[30] S. Virtanen, A. Klami, and S. Kaski. Bayesian CCA via group sparsity. In ICML, pages 457–464, 2011.
[31] P. Wohlhart and V. Lepetit. Learning descriptors for object recognition and 3D pose estimation. In CVPR, pages 3109–3118, 2015.
Conditional Generative Moment-Matching Networks
Yong Ren, Jialian Li, Yucen Luo, Jun Zhu*
Dept. of Comp. Sci. & Tech., TNList Lab; Center for Bio-Inspired Computing Research
State Key Lab for Intell. Tech. & Systems, Tsinghua University, Beijing, China
{renyong15, luoyc15, jl12}@mails.tsinghua.edu.cn; [email protected]
*Corresponding author
Abstract
Maximum mean discrepancy (MMD) has been successfully applied to learn deep generative models for characterizing a joint distribution of variables via kernel mean embedding. In this paper, we present conditional generative moment-matching networks (CGMMN), which learn a conditional distribution given some input variables based on a conditional maximum mean discrepancy (CMMD) criterion. The learning is performed by stochastic gradient descent with the gradient calculated by back-propagation. We evaluate CGMMN on a wide range of tasks, including predictive modeling, contextual generation, and Bayesian dark knowledge, which distills knowledge from a Bayesian model by learning a relatively small CGMMN student network. Our results demonstrate competitive performance in all the tasks.
1 Introduction
Deep generative models (DGMs) characterize the distribution of observations with a multilayered structure of hidden variables under nonlinear transformations. Among various deep learning methods, DGMs are a natural choice for tasks that require probabilistic reasoning and uncertainty estimation, such as image generation [1], multimodal learning [30], and missing data imputation. Recently, the predictive power, which was often shown inferior to that of pure recognition networks (e.g., deep convolutional networks), has also been significantly improved by employing discriminative max-margin learning [18].
For the arguably more challenging unsupervised learning, [5] presents the generative adversarial network (GAN), which adopts a game-theoretical min-max optimization formalism. GAN has been extended with success to various tasks [21, 1]. However, the min-max formalism is often hard to solve. The recent work [19, 3] presents generative moment matching networks (GMMN), which have a simpler objective function than GAN while retaining the advantages of deep learning. GMMN defines a generative model by drawing samples from some simple distribution (e.g., uniform) and passing them through a parametric deep network. To learn the parameters, GMMN adopts maximum mean discrepancy (MMD) [7], a moment matching criterion where kernel mean embedding techniques are used to avoid unnecessary assumptions on the distributions. Back-propagation can be used to calculate the gradient as long as the kernel function is smooth.
A GMMN network estimates the joint distribution of a set of variables. However, we are more
interested in a conditional distribution in many cases, including (1) predictive modeling: compared
to a generative model that defines the joint distribution p(x, y) of input data x and response variable
y, a conditional model p(y|x) is often more direct without unnecessary assumptions on modeling x,
and leads to better performance with fewer training examples [23, 16]; (2) contextual generation: in
some cases, we are interested in generating samples based on some context, such as class labels [21],
visual attributes [32] or the input information in cross-modal generation (e.g., from image to text [31]
or vice versa [2]); and (3) building large networks: conditional distributions are essential building
blocks of a large generative probabilistic model. One recent relevant work [1] provides a good
example of stacking multiple conditional GAN networks [21] in a Laplacian pyramid structure to
generate natural images.
In this paper, we present conditional generative moment-matching networks (CGMMN) to learn a
flexible conditional distribution when some input variables are given. CGMMN largely extends the
capability of GMMN to address a wide range of application problems as mentioned above, while
keeping the training process simple. Specifically, CGMMN admits a simple generative process,
which draws a sample from a simple distribution and then passes the sample as well as the given
conditional variables through a deep network to generate a target sample. To learn the parameters, we develop conditional maximum mean discrepancy (CMMD), which measures the Hilbert-Schmidt norm (generalized Frobenius norm) between the kernel mean embedding of an empirical
conditional distribution and that of our generative model. Thanks to the simplicity of the conditional generative model, we can easily draw a set of samples to estimate the kernel mean embedding
as well as the CMMD objective. Then, optimizing the objective can be efficiently implemented
via back-propagation. We evaluate CGMMN in a wide range of tasks, including predictive modeling, contextual generation, and Bayesian dark knowledge [15], an interesting case of distilling dark
knowledge from Bayesian models. Our results on various datasets demonstrate that CGMMN can
obtain competitive performance in all these tasks.
2 Preliminary
In this section, we briefly review some preliminary knowledge, including maximum mean discrepancy (MMD) and kernel embedding of conditional distributions.
2.1 Hilbert Space Embedding
We begin by providing an overview of Hilbert space embedding, where we represent distributions by elements in a reproducing kernel Hilbert space (RKHS). An RKHS $\mathcal{F}$ on $\mathcal{X}$ with kernel $k$ is a Hilbert space of functions $f : \mathcal{X} \rightarrow \mathbb{R}$. Its inner product $\langle \cdot, \cdot \rangle_{\mathcal{F}}$ satisfies the reproducing property: $\langle f(\cdot), k(x, \cdot) \rangle_{\mathcal{F}} = f(x)$. Kernel functions are not restricted to $\mathbb{R}^d$; they can also be defined on graphs, time series and structured objects [11]. We usually view $\phi(x) := k(x, \cdot)$ as a (usually infinite-dimensional) feature map of $x$. The most interesting part is that we can embed a distribution by taking the expectation of its feature map:
$$\mu_X := \mathbb{E}_X[\phi(X)] = \int \phi(X)\, dP(X).$$
If $\mathbb{E}_X[k(X, X)] < \infty$, $\mu_X$ is guaranteed to be an element in the RKHS. This kind of kernel mean embedding provides us another perspective on manipulating distributions whose parametric forms are not assumed, as long as we can draw samples from them. This technique has been widely applied in many tasks, including feature extraction, density estimation and two-sample tests [27, 7].
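By the reproducing property, inner products with the empirical mean embedding reduce to averages of kernel evaluations, so the embedding can be manipulated entirely through samples. A small NumPy sketch (the RBF kernel and names are illustrative):

import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def mean_embedding_at(t, X, sigma=1.0):
    # <mu_X, phi(t)> = (1/N) sum_i k(x_i, t): the empirical kernel mean
    # embedding evaluated at a point t.
    return np.mean([rbf(x, t, sigma) for x in X])

X = np.random.randn(500, 2)                 # samples from some distribution
print(mean_embedding_at(np.zeros(2), X))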
2.2 Maximum Mean Discrepancy
Let $X = \{x_i\}_{i=1}^N$ and $Y = \{y_j\}_{j=1}^M$ be sets of samples from distributions $P_X$ and $P_Y$, respectively. Maximum mean discrepancy (MMD), also known as the kernel two-sample test [7], is a frequentist estimator to answer the query whether $P_X = P_Y$ based on the observed samples. The basic idea behind MMD is that if the generating distributions are identical, all the statistics are the same. Formally, MMD defines the following difference measure:
$$\mathrm{MMD}[\mathcal{K}, P_X, P_Y] := \sup_{f \in \mathcal{K}} \left( \mathbb{E}_X[f(X)] - \mathbb{E}_Y[f(Y)] \right),$$
where $\mathcal{K}$ is a class of functions. [7] found that the class of functions in a universal RKHS $\mathcal{F}$ is rich enough to distinguish any two distributions and MMD can be expressed as the difference of their mean embeddings. Here, universality requires that $k(\cdot, \cdot)$ is continuous and $\mathcal{F}$ is dense in $C(\mathcal{X})$ with respect to the $L_\infty$ norm, where $C(\mathcal{X})$ is the space of bounded continuous functions on $\mathcal{X}$. We summarize the result in the following theorem:
Theorem 1 [7] Let $\mathcal{K}$ be a unit ball in a universal RKHS $\mathcal{F}$, defined on the compact metric space $\mathcal{X}$, with an associated continuous kernel $k(\cdot, \cdot)$. When the mean embeddings $\mu_p, \mu_q \in \mathcal{F}$, the MMD objective function can be expressed as $\mathrm{MMD}[\mathcal{K}, p, q] = \|\mu_p - \mu_q\|_{\mathcal{F}}^2$. Besides, $\mathrm{MMD}[\mathcal{K}, p, q] = 0$ if and only if $p = q$.
In practice, an estimate of the MMD objective compares the square difference between the empirical
kernel mean embeddings:
#
#2
# $
#
N
M
$
#
#
1
1
2
"
#
LMMD = #
?(xi ) ?
?(yi )#
# ,
M j=1
# N i=1
#
F
which can be easily evaluated by expanding the square and using the associated kernel k(?, ?).
Asymptotically, L"2MMD is an unbiased estimator.
2.3 Kernel Embedding of Conditional Distributions
The kernel embedding of a conditional distribution $P(Y|X)$ is defined as $\mu_{Y|x} := \mathbb{E}_{Y|x}[\phi(Y)] = \int \phi(y)\, dP(y|x)$. Unlike the embedding of a single distribution, the embedding of a conditional distribution is not a single element in the RKHS, but sweeps out a family of points in the RKHS, each indexed by a fixed value of $x$. Formally, the embedding of a conditional distribution is represented as an operator $\mathcal{C}_{Y|X}$, which satisfies the following properties:
$$1.\ \mu_{Y|x} = \mathcal{C}_{Y|X}\,\phi(x); \qquad 2.\ \mathbb{E}_{Y|x}[g(Y)|x] = \langle g, \mu_{Y|x} \rangle_{\mathcal{G}}, \qquad (1)$$
where $\mathcal{G}$ is the RKHS corresponding to $Y$.
[29] found that such an operator exists under some assumptions, using the technique of the cross-covariance operator $\mathcal{C}_{XY} : \mathcal{G} \rightarrow \mathcal{F}$:
$$\mathcal{C}_{XY} := \mathbb{E}_{XY}[\phi(X) \otimes \phi(Y)] - \mu_X \otimes \mu_Y,$$
where $\otimes$ is the tensor product. An interesting property is that $\mathcal{C}_{XY}$ can also be viewed as an element in the tensor product space $\mathcal{G} \otimes \mathcal{F}$. The result is summarized as follows.
Theorem 2 [29] Assuming that $\mathbb{E}_{Y|X}[g(Y)|X] \in \mathcal{F}$, the embedding of conditional distributions $\mathcal{C}_{Y|X}$ defined as $\mathcal{C}_{Y|X} := \mathcal{C}_{YX} \mathcal{C}_{XX}^{-1}$ satisfies properties 1 and 2.
Given a dataset $\mathcal{D}_{XY} = \{(x_i, y_i)\}_{i=1}^N$ of size $N$ drawn i.i.d. from $P(X, Y)$, we can estimate the conditional embedding operator as $\hat{\mathcal{C}}_{Y|X} = \Phi(K + \lambda I)^{-1} \Upsilon^\top$, where $\Phi = (\phi(y_1), ..., \phi(y_N))$, $\Upsilon = (\phi(x_1), ..., \phi(x_N))$, $K = \Upsilon^\top \Upsilon$ and $\lambda$ serves as regularization. The estimator is an element in the tensor product space $\mathcal{F} \otimes \mathcal{G}$ and satisfies properties 1 and 2 asymptotically. When the domain of $X$ is finite, we can also estimate $\mathcal{C}_{XX}^{-1}$ and $\mathcal{C}_{YX}$ directly (see Appendix A.2.2 for more details).
3 Conditional Generative Moment-Matching Networks
We now present CGMMN, including a conditional maximum mean discrepancy criterion as the
training objective, a deep generative architecture and a learning algorithm.
3.1 Conditional Maximum Mean Discrepancy
Given conditional distributions $P_{Y|X}$ and $P_{Z|X}$, we aim to test whether they are the same in the sense that, when $X = x$ is fixed, whether $P_{Y|x} = P_{Z|x}$ holds or not. When the domain of $X$ is finite, a straightforward solution is to test whether $P_{Y|x} = P_{Z|x}$ for each $x$ separately by using MMD. However, this is impossible when $X$ is continuous. Even in the finite case, as the separate tests do not share statistics, we may need an extremely large number of training data to test a different model for each single value of $x$. Below, we present a conditional maximum mean discrepancy criterion, which avoids the above issues.
Recall the definition of the kernel mean embedding of conditional distributions. When $X = x$ is fixed, we have the kernel mean embedding $\mu_{Y|x} = \mathcal{C}_{Y|X}\,\phi(x)$. As a result, if we have $\mathcal{C}_{Y|X} = \mathcal{C}_{Z|X}$, then $\mu_{Y|x} = \mu_{Z|x}$ is also satisfied for every fixed $x$. By the virtue of Theorem 1, $P_{Y|x} = P_{Z|x}$ follows, as the following theorem states.
Theorem 3 Assume that $\mathcal{F}$ is a universal RKHS with an associated kernel $k(\cdot, \cdot)$, $\mathbb{E}_{Y|X}[g(Y)|X] \in \mathcal{F}$, $\mathbb{E}_{Z|X}[g(Z)|X] \in \mathcal{F}$ and $\mathcal{C}_{Y|X}, \mathcal{C}_{Z|X} \in \mathcal{F} \otimes \mathcal{G}$. If the embedding of conditional distributions $\mathcal{C}_{Y|X} = \mathcal{C}_{Z|X}$, then $P_{Y|X} = P_{Z|X}$ in the sense that for every fixed $x$, we have $P_{Y|x} = P_{Z|x}$.
The above theorem gives us a sufficient condition to guarantee that two conditional distributions are
the same. We use the operators to measure the difference of two conditional distributions and we
call it conditional maximum mean discrepancy (CMMD), which is defined as follows:
#
#2
L2CMMD = #CY |X ? CZ|X #F ?G .
s
d
M
Suppose we have two sample sets DXY
= {(xi , yi )}N
i=1 and DXY = {(xi , yi )}i=1 . Similar
as in MMD, in practice we compare the square difference between the empirical estimates of the
conditional embedding operators:
#
#2
# "d
"s #
L"2CMMD = #C
?C
,
#
Y |X
Y |X
F ?G
where the superscripts $s$ and $d$ denote the two sets of samples, respectively. For notation clarity, we define $\tilde{K} = K + \lambda I$. Then, using kernel tricks, we can compute the difference only in terms of kernel gram matrices:
$$\hat{L}_{CMMD}^2 = \left\| \Phi_d (K_d + \lambda I)^{-1} \Upsilon_d^\top - \Phi_s (K_s + \lambda I)^{-1} \Upsilon_s^\top \right\|_{\mathcal{F} \otimes \mathcal{G}}^2$$
$$= \mathrm{Tr}\left( K_d \tilde{K}_d^{-1} L_d \tilde{K}_d^{-1} \right) + \mathrm{Tr}\left( K_s \tilde{K}_s^{-1} L_s \tilde{K}_s^{-1} \right) - 2\, \mathrm{Tr}\left( K_{sd} \tilde{K}_d^{-1} L_{ds} \tilde{K}_s^{-1} \right), \qquad (2)$$
where $\Phi_d := (\phi(y_1^d), ..., \phi(y_N^d))$ and $\Upsilon_d := (\phi(x_1^d), ..., \phi(x_N^d))$ are implicitly formed feature matrices, and $\Phi_s$ and $\Upsilon_s$ are defined similarly for dataset $\mathcal{D}_{XY}^s$. $K_d = \Upsilon_d^\top \Upsilon_d$ and $K_s = \Upsilon_s^\top \Upsilon_s$ are the gram matrices for input variables, while $L_d = \Phi_d^\top \Phi_d$ and $L_s = \Phi_s^\top \Phi_s$ are the gram matrices for output variables. Finally, $K_{sd} = \Upsilon_s^\top \Upsilon_d$ and $L_{ds} = \Phi_d^\top \Phi_s$ are the gram matrices between the two datasets on input and output variables, respectively.
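A direct NumPy transcription of the trace form in Eq. (2), taking the six gram matrices as inputs (names mirror the equation; explicit inverses are used for clarity, though linear solves would be preferable numerically):

import numpy as np

def cmmd2(Kd, Ks, Ksd, Ld, Ls, Lds, lam=1e-3):
    # Kd (N x N), Ks (M x M): input grams; Ld, Ls: output grams;
    # Ksd (M x N) and Lds (N x M): cross grams between the two sets.
    N, M = Kd.shape[0], Ks.shape[0]
    Kd_inv = np.linalg.inv(Kd + lam * np.eye(N))   # tilde{K}_d^{-1}
    Ks_inv = np.linalg.inv(Ks + lam * np.eye(M))   # tilde{K}_s^{-1}
    return (np.trace(Kd @ Kd_inv @ Ld @ Kd_inv)
            + np.trace(Ks @ Ks_inv @ Ls @ Ks_inv)
            - 2 * np.trace(Ksd @ Kd_inv @ Lds @ Ks_inv))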
It is worth mentioning that we have assumed the conditional mean embedding operator $\mathcal{C}_{Y|X} \in \mathcal{F} \otimes \mathcal{G}$ so that the CMMD objective is well-defined, which needs some smoothness assumptions such that $\mathcal{C}_{XX}^{-3/2} \mathcal{C}_{XY}$ is Hilbert-Schmidt [8]. In practice, the assumptions may not hold; however, the empirical estimator $\Phi(K + \lambda I)^{-1} \Upsilon^\top$ is always an element in the tensor product space, which gives a well-justified approximation (i.e., the Hilbert-Schmidt norm exists) for practical use [29].
Remark 1 Taking a close look at the objectives of MMD and CMMD, we can find some interesting connections. Suppose $N = M$. Omitting the constant scalar, the objective function of MMD can be rewritten as
$$\hat{L}_{MMD}^2 = \mathrm{Tr}(L_d \mathbf{1}) + \mathrm{Tr}(L_s \mathbf{1}) - 2\, \mathrm{Tr}(L_{ds} \mathbf{1}),$$
where $\mathbf{1}$ is the matrix with all entries equal to 1. The objective function of CMMD can be expressed as
$$\hat{L}_{CMMD}^2 = \mathrm{Tr}(L_d C_1) + \mathrm{Tr}(L_s C_2) - 2\, \mathrm{Tr}(L_{ds} C_3),$$
where $C_1, C_2, C_3$ are some matrices based on the conditional variables $x$ in both data sets. The difference is that instead of putting uniform weights on the gram matrix as in MMD, CMMD applies non-uniform weights, reflecting the influence of the conditional variables. Similar observations have been shown in [29] for the conditional mean operator, where the estimated conditional embedding $\mu_{Y|x}$ is a non-uniform weighted combination of $\phi(x_i)$.
3.2 CGMMN Nets
We now present a conditional DGM and train it by the CMMD criterion. One desirable property of the DGM is that we can easily draw samples from it to estimate the CMMD objective. Below, we present such a network that takes both the given conditional variables and an extra set of random variables as inputs, and then passes them through a deep neural network with nonlinear transformations to produce the samples of the target variables.
Specifically, our network is built on the fact that for any distribution $P$ on sample space $\mathcal{K}$ and any continuous distribution $Q$ on $\mathcal{L}$ that are regular enough, there is a function $G : \mathcal{L} \rightarrow \mathcal{K}$ such that $G(x) \sim P$, where $x \sim Q$ [12]. This fact has been recently explored by [3, 19] to define a deep generative model and estimate the parameters by the MMD criterion. For a conditional model, we would like the function $G$ to depend on the given values of the input variables. This can be fulfilled via a process as illustrated in Fig. 1, where the inputs of a deep neural network (DNN) consist of two parts: the input variables $\mathbf{x}$ and an extra set of stochastic variables $H \in \mathbb{R}^d$ that follow some continuous distribution. For simplicity, we put a uniform prior on each hidden unit, $p(\mathbf{h}) = \prod_{i=1}^d U(h_i)$, where $U(h) = I(0 \le h \le 1)$ is a uniform distribution on $[0, 1]$ and $I(\cdot)$ is the indicator function that equals 1 if the predicate holds and 0 otherwise. After passing both $\mathbf{x}$ and $\mathbf{h}$ through the DNN, we get a sample from the conditional distribution $P(Y|\mathbf{x})$: $\mathbf{y} = f(\mathbf{x}, \mathbf{h} \,|\, \mathbf{w})$, where $f$ denotes the deterministic mapping function represented by the network with parameters $\mathbf{w}$. By default, we concatenate $\mathbf{x}$ and $\mathbf{h}$ and feed $\tilde{\mathbf{x}} = (\mathbf{x}, \mathbf{h})$ into the network. In this case, we have $\mathbf{y} = f(\tilde{\mathbf{x}} \,|\, \mathbf{w})$.
Due to the flexibility and rich capability of DNNs in fitting nonlinear functions, this generative process can characterize various conditional distributions well. For example, a simple network can consist of multi-layer perceptrons (MLP) activated by some non-linear functions such as the rectified linear unit (ReLU) [22]. Of course the hidden layers are not restricted to MLPs, as long as they support gradient propagation. We also use convolutional neural networks (CNN) as hidden layers [25] in our experiments. It is worth mentioning that there exist other ways to combine the conditional variables $\mathbf{x}$ with the auxiliary variables $H$. For example, we can add a corruption noise to the conditional variables $\mathbf{x}$ to produce the input of the network, e.g., define $\tilde{\mathbf{x}} = \mathbf{x} + \mathbf{h}$, where $\mathbf{h}$ may follow a Gaussian distribution $\mathcal{N}(0, \sigma I)$ in this case.
Figure 1: An example architecture of CGMMN networks.
With the above generative process, we can train the network by optimizing the CMMD objective with proper regularization. Specifically, let $\mathcal{D}_{XY}^d = \{(x_i^d, y_i^d)\}_{i=1}^N$ denote the given training dataset. To estimate the CMMD objective, we draw a set of samples from the above generative model, where the conditional variables can be set by sampling from the training set with or without small perturbation (more details are in the experimental section). Thanks to its simplicity, the sampling procedure can be easily performed. Precisely, we provide each $\mathbf{x}$ in the training dataset to the generator to get a new sample, and we denote $\mathcal{D}_{XY}^s = \{(x_i^s, y_i^s)\}_{i=1}^M$ as the generated samples. Then, we can optimize the CMMD objective in Eq. (2) by gradient descent. See more details in Appendix A.1.
Algorithm 1 Stochastic gradient descent for CGMMN
1: Input: dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$
2: Output: learned parameters $\mathbf{w}$
3: Randomly divide training dataset $\mathcal{D}$ into mini-batches
4: while stopping criterion not met do
5:   Draw a minibatch $B$ from $\mathcal{D}$;
6:   For each $x \in B$, generate a $y$; and set $B'$ to contain all the generated $(x, y)$;
7:   Compute the gradient $\frac{\partial \hat{L}_{CMMD}^2}{\partial \mathbf{w}}$ on $B$ and $B'$;
8:   Update $\mathbf{w}$ using the gradient with a proper regularizer.
9: end while
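To make the procedure concrete, here is a minimal PyTorch sketch of Alg. 1 on toy data (our own re-implementation under assumed hyperparameters, not the paper's code; a single RBF kernel stands in for whatever kernels the experiments use, and the whole training set is treated as a single mini-batch):

import torch
import torch.nn as nn

def rbf_gram(A, B, sigma=1.0):
    return torch.exp(-torch.cdist(A, B) ** 2 / (2 * sigma ** 2))

def cmmd2(xd, yd, xs, ys, lam=0.1):
    # Eq. (2), with (d) the data set and (s) the generated set.
    N, M = xd.shape[0], xs.shape[0]
    Kd, Ks, Ksd = rbf_gram(xd, xd), rbf_gram(xs, xs), rbf_gram(xs, xd)
    Ld, Ls, Lds = rbf_gram(yd, yd), rbf_gram(ys, ys), rbf_gram(yd, ys)
    Kd_i = torch.linalg.inv(Kd + lam * torch.eye(N))
    Ks_i = torch.linalg.inv(Ks + lam * torch.eye(M))
    return (torch.trace(Kd @ Kd_i @ Ld @ Kd_i)
            + torch.trace(Ks @ Ks_i @ Ls @ Ks_i)
            - 2 * torch.trace(Ksd @ Kd_i @ Lds @ Ks_i))

gen = nn.Sequential(nn.Linear(13 + 5, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

x = torch.rand(100, 13)                  # conditioning inputs (toy data)
y = x.sum(dim=1, keepdim=True)           # toy targets
for step in range(200):
    h = torch.rand(x.shape[0], 5)        # uniform auxiliary variables
    y_gen = gen(torch.cat([x, h], dim=1))
    loss = cmmd2(x, y, x, y_gen)         # same conditioning inputs per side
    opt.zero_grad()
    loss.backward()
    opt.step()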
Note that the inverse matrices $\tilde{K}_s^{-1}$ and $\tilde{K}_d^{-1}$ in the CMMD objective are independent of the model parameters, suggesting that we are not restricted to using differentiable kernels on the conditional variables $\mathbf{x}$. Since the computation cost of the kernel gram matrix grows cubically with the sample size, we present a mini-batch version of the algorithm in Alg. 1; some discussion can be found in Appendix A.2.1.
4 Experiments
We now present a diverse range of applications to evaluate our model, including predictive modeling, contextual generation and an interesting case of Bayesian dark knowledge [15]. Our results
demonstrate that CGMMN is competitive in all the tasks.
4.1 Predictive Performance
4.1.1 Results on MNIST dataset
We first present the prediction performance on the widely used MNIST dataset, which consists of images in 10 classes. Each image is of size 28 × 28 and the gray-scale is normalized to be in the range [0, 1]. The whole dataset is divided into 3 parts with 50,000 training examples, 10,000 validation examples and 10,000 testing examples.
For the prediction task, the conditional variables are the images $\mathbf{x} \in [0, 1]^{28 \times 28}$, and the generated sample is a class label, which is represented as a vector $\mathbf{y} \in \mathbb{R}_+^{10}$, where each $y_i$ denotes the confidence that $\mathbf{x}$ is in class $i$. We consider two types of architectures in CGMMN: MLP and CNN.
We compare our model, denoted as CGMMN in the MLP case and CGMMN-CNN in the CNN case, with the Variational Auto-encoder (VA) [14], which is an unsupervised DGM learnt by stochastic variational methods. To use VA for classification, a subsequent classifier is built: we first learn feature representations by VA and then learn a linear SVM on these features using the Pegasos algorithm [26]. We also compare with max-margin DGMs (denoted as MMVA with MLP as hidden layers and CMMVA in the CNN case) [18], which is a state-of-the-art DGM for prediction, and several other strong baselines, including Stochastic Pooling [33], Network in Network [20], Maxout Network [6] and Deeply-supervised nets (DSN) [17].

Table 1: Error rates (%) on MNIST dataset

Model                     Error Rate
VA+Pegasos [18]           1.04
MMVA [18]                 0.90
CGMMN                     0.97
CVA + Pegasos [18]        1.35
CGMMN-CNN                 0.47
Stochastic Pooling [33]   0.47
Network in Network [20]   0.47
Maxout Network [6]        0.45
CMMVA [18]                0.45
DSN [17]                  0.39
In the MLP case, the model architecture is shown in Fig. 1 with a uniform distribution for hidden variables of dimension 5. Note that since we do not need much randomness for the prediction task, this low-dimensional hidden space is sufficient. In fact, we did not observe much difference with a higher dimension (e.g., 20 or 50), which simply makes the training slower. The MLP has 3 hidden layers with hidden unit numbers (500, 200, 100) and the ReLU activation function. A minibatch size of 500 is adopted. In the CNN case, we use the same architecture as [18], where there are 32 feature maps in the first two convolutional layers and 64 feature maps in the last three hidden layers. An MLP of 500 hidden units is adopted at the end of the convolutional layers. The ReLU activation function is used in the convolutional layers and the sigmoid function in the last layer. We do not pre-train our model and a minibatch size of 500 is adopted as well. The total number of parameters in the network is comparable with the competitors [18, 17, 20, 6].
In both settings, we use AdaM [13] to optimize parameters. After training, we simply draw a sample from our model conditioned on the input image and choose the index of the maximum element of $\mathbf{y}$ as its prediction. Table 1 shows the results. We can see that CGMMN-CNN is competitive with various state-of-the-art competitors that do not use data augmentation or multiple model voting (e.g., CMMVA). DSN benefits from using more supervision signal in every hidden layer and outperforms the other competitors.
4.1.2 Results on SVHN dataset
We then report the prediction performance on the Street View House Numbers (SVHN) dataset. SVHN is a large dataset consisting of color images of size 32 × 32 in 10 classes. The dataset consists of 598,388 training examples, 6,000 validation examples and 26,032 testing examples. The task is significantly harder than classifying hand-written digits. Following [25, 18], we preprocess the data by Local Contrast Normalization (LCN). The architecture of our network is similar to that in MNIST and we only use CNN as middle layers here. A minibatch size of 300 is used and the other settings are the same as in the MNIST experiments.

Table 2: Error rates (%) on SVHN dataset

Model                     Error Rate
CVA+Pegasos [18]          25.3
CGMMN-CNN                 3.13
CNN [25]                  4.9
CMMVA [18]                3.09
Stochastic Pooling [33]   2.80
Network in Network [20]   2.47
Maxout Network [6]        2.35
DSN [17]                  1.92

Table 2 shows the results. Though there is a gap between our CGMMN and some discriminative deep networks such as DSN, our results are comparable with those of CMMVA, which is the state-of-the-art DGM for prediction. CGMMN is compatible with various network architectures and we expect to get better results with more sophisticated structures.
4.2 Generative Performance
4.2.1 Results on MNIST dataset
We first test the generative performance on the widely used MNIST dataset. For the generation task, the conditional variables are the image labels. Since $\mathbf{y}$ takes a finite number of values, as mentioned in Sec. 2.3, we estimate $\mathcal{C}_{YX}$ and $\mathcal{C}_{XX}^{-1}$ directly and combine them as the estimation of $\mathcal{C}_{Y|X}$ (see Appendix A.2.2 for practical details).
The architecture is the same as before but with the positions of $\mathbf{x}$ and $\mathbf{y}$ exchanged. For the input layer, besides the label information $\mathbf{y}$ as conditional variables (represented by a one-hot vector of dimension 10), we further draw a sample from a uniform distribution of dimension 20, which is sufficiently large.
Figure 2: Samples in (a) are from the MNIST dataset; (b) are generated randomly from our CGMMN network; (c) are generated randomly from CGMMN conditioned on label y = 0. Both (b) and (c) are generated after running 500 epochs. Panels: (a) MNIST samples; (b) random CGMMN samples; (c) samples conditioned on label 0.
Overall, the network is a 5-layer MLP with input dimension 30, middle layer hidden unit numbers (64, 256, 256, 512), and an output layer of dimension 28 × 28, which represents the image in pixels. A minibatch of size 200 is adopted.
Fig. 2 shows some samples generated using our CGMMN, where in (b) the conditional variable y
is randomly chosen from the 10 possible values, and in (c) y is pre-fixed at class 0. As we can see,
when conditioned on label 0, almost all the generated samples are really in that class.
As in [19], we investigate whether the models learn to merely copy the data. We visualize the nearest neighbors in the MNIST dataset of several samples generated by CGMMN in terms of Euclidean pixel-wise distance [5] in Fig. 3. As we can see, by this metric, the samples are not merely copies.
Figure 3: CGMMN samples and their nearest neighbours in the MNIST dataset. The first row shows our generated samples.
As also discussed in [19], real-world data can be complicated and high-dimensional, and autoencoders can be good at representing data in a code space that captures enough statistical information to reliably reconstruct the data. For example, visual data, while represented in a high dimension, often exists on a low-dimensional manifold. Thus it is beneficial to combine autoencoders with our CGMMN models to generate smoother images, in contrast to Fig. 2 where there is some noise in the generated samples. Precisely, we first learn an auto-encoder and produce code representations of the training data, then freeze the auto-encoder weights and learn a CGMMN to minimize the CMMD objective between the codes generated by our CGMMN and the training data codes. The generating results are shown in Fig. 4. Compared to Fig. 2, the samples are clearer.
Figure 4: Samples generated by CGMMN+Autoencoder, where the architecture follows from [19].
4.2.2 Results on Yale Face dataset
We now show the generative results on the Extended Yale Face dataset [9], which contains 2,414 grayscale images for 38 individuals of dimension 32 × 32. There are about 64 images per subject, one per different facial expression or configuration. A smaller version of the dataset consists of 165 images of 15 individuals, and the generating result can be found in Appendix A.4.2.
We adopt the same architecture as in the first generating experiment for MNIST, which is a 5-layer MLP with an input dimension of 50 (12 hidden variables and 38 dimensions for conditional variables, i.e., labels) and middle layer hidden unit numbers (64, 256, 256, 512). A mini-batch size of 400 is adopted. The other settings are the same as in the MNIST experiment. The overall generating results are shown in Fig. 5, where we really generate diverse images for different individuals. Again, as shown in Appendix A.4.1, the generated samples are not merely copies of the training data.
4.3 Distilling Bayesian Models
Our final experiment applies CGMMN to distill knowledge from Bayesian models by learning a conditional distribution model for efficient prediction. Specifically, let θ denote the random variables. A Bayesian model first computes the posterior distribution given the training set D = {(x_i, y_i)}_{i=1}^N as p(θ|D). In the prediction stage, given a new input x, a response sample y is generated via the predictive distribution
    p(y|x, D) = ∫ p(y|x, θ) p(θ|D) dθ.
This procedure usually involves a complicated integral and is thus time-consuming. [15] show that we can learn a relatively simple student network to distill knowledge from the teacher network (i.e., the Bayesian model) and approximately represent its predictive distribution p(y|x, D).
Our CGMMN provides a new way to build such a student network for Bayesian dark knowledge. To learn a CGMMN, we need two datasets to estimate the CMMD objective: one is generated by the teacher network, and the other is generated by the CGMMN itself. The former serves as the training dataset for our CGMMN, while the latter is produced during its training process. For high-dimensional data, adopting the same strategy as [15], we sample "near" the training data to generate the former dataset (i.e., we perturb the inputs in the training set slightly before sending them to the teacher network to sample y), as sketched below.
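A sketch of how such a teacher dataset could be assembled. The perturbation scale and the teacher interface (a posterior sampler sample_theta and a likelihood sampler sample_y) are assumptions for illustration, not the exact setup of [15].

import numpy as np

def make_teacher_dataset(X_train, sample_theta, sample_y, n_pairs=3000, noise=0.05):
    # Monte-Carlo approximation of the teacher's predictive distribution:
    # draw theta ~ p(theta | D), then y ~ p(y | x, theta), with x perturbed
    # slightly so the student does not simply memorize the training inputs.
    rng = np.random.default_rng(0)
    idx = rng.integers(0, len(X_train), size=n_pairs)
    X = X_train[idx] + noise * rng.standard_normal((n_pairs, X_train.shape[1]))
    Y = np.array([sample_y(x, sample_theta()) for x in X])
    return X, Y   # conditional variables X and responses Y for CMMD training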
Due to space limitations, we test our model on a regression problem on the Boston housing dataset, which was also used in [15, 10], while deferring results on a synthetic dataset to Appendix A.3. The dataset consists of 506 data points, each of dimension 13. We first train a PBP model [10], a scalable method for posterior inference in Bayesian neural networks, as the teacher, and then distill it using our CGMMN model.

Figure 5: CGMMN-generated samples for the Extended Yale Face dataset. Columns are conditioned on different individuals.
We test whether the distilled model degrades prediction performance. We distill the PBP model [10] using an MLP network with three hidden layers of (100, 50, 50) units. We draw N = 3,000 sample pairs {(x_i, y_i)}_{i=1}^N from the PBP network, where the x_i are the input variables that serve as conditional variables in our model. For a fair comparison, x_i is generated by adding noise to the training data, so that we avoid fitting the testing data directly. We evaluate the prediction performance on the original testing data by root mean square error (RMSE). Table 3 shows the results. We can see that the distilled model does not harm prediction performance. It is worth mentioning that we are not restricted to distilling knowledge from PBP; in fact, any Bayesian model can be distilled using CGMMN.

Table 3: Distillation results on the Boston Housing dataset; the error is measured by RMSE.

    PBP prediction    Distilled by CGMMN
    2.574 ± 0.089     2.580 ± 0.093
5 Conclusions and Discussions
We present conditional generative moment-matching networks (CGMMN), a flexible framework for representing conditional distributions. CGMMN largely extends the ability of previous DGMs based on maximum mean discrepancy (MMD) while keeping the training process simple: learning is done by back-propagation. Experimental results on various tasks, including predictive modeling, data generation, and Bayesian dark knowledge, demonstrate competitive performance.
Conditional modeling has been practiced as a natural step towards improving the discriminative ability of a statistical model and/or relaxing unnecessary assumptions about the conditional variables. For deep learning models, sum-product networks (SPNs) [24] provide exact inference on DGMs, and their conditional extension [4] improves discriminative ability; the recent work [21] presents a conditional version of generative adversarial networks (GANs) [5] with wider applicability. Besides, the recently proposed conditional variational autoencoder [28] also works well on structured prediction. Our work fills this research void, significantly improving the applicability of moment-matching networks.
Acknowledgments
The work was supported by the National Basic Research Program (973 Program) of China (No.
2013CB329403), National NSF of China Projects (Nos. 61620106010, 61322308, 61332007), the
Youth Top-notch Talent Support Program, and the Collaborative Projects with Tencent and Intel.
References
[1] E. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a laplacian pyramid
of adversarial networks. NIPS, 2015.
[2] A. Dosovitskiy, J. Springenberg, M. Tatarchenko, and T. Brox. Learning to generate chairs, tables and
cars with convolutional networks. arXiv:1411.5928, 2015.
[3] G. Dziugaite, D. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean
discrepancy optimization. UAI, 2015.
[4] R. Gens and P. Domingos. Discriminative learning of sum-product networks. NIPS, 2012.
[5] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
Generative adversarial nets. NIPS, 2014.
[6] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. ICML, 2013.
[7] A. Gretton, K. Borgwardt, M. Rasch, B. Scholkopf, and A. Smola. A kernel two-sample test. JMLR,
2008.
[8] S. Grunewalder, G. Lever, L. Baldassarre, S. Patterson, A. Gretton, and M. Pontil. Conditional mean
embedding as regressors. ICML, 2012.
[9] X. He, S. Yan, Y. Hu, P. Niyogi, and H. Zhang. Face recognition using laplacianfaces. IEEE Trans.
Pattern Anal. Mach. Intelligence, 27(3):328–340, 2005.
[10] J. Hernandez-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of bayesian
neural networks. ICML, 2015.
[11] T. Hofmann, B. Scholkopf, and A. Smola. Kernel methods in machine learning. The Annals of Statistics,
36(3):1171–1220, 2008.
[12] O. Kallenberg. Foundations of modern probability. New York: Springer, 2002.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[14] D. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
[15] A. Korattikara, V. Rathod, K. Murphy, and M. Welling. Bayesian dark knowledge. NIPS, 2015.
[16] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. ICML, 2001.
[17] C. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. AISTATS, 2015.
[18] C. Li, J. Zhu, T. Shi, and B. Zhang. Max-margin deep generative models. NIPS, 2015.
[19] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. ICML, 2015.
[20] M. Lin, Q. Chen, and S. Yan. Network in network. ICLR, 2014.
[21] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
[22] V. Nair and G. Hinton. Rectified linear units improve restricted boltzmann machines. ICML, 2010.
[23] A. Ng and M.I. Jordan. On discriminative vs. generative classifiers: a comparison of logistic regression
and naive bayes. NIPS, 2001.
[24] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. UAI, 2011.
[25] P. Sermanet, S. Chintala, and Y. Lecun. Convolutional neural networks applied to house numbers digit
classification. ICPR, 2012.
[26] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for
svm. Mathematical Programming, Series B, 2011.
[27] A. Smola, A. Gretton, L. Song, and B. Scholkopf. A Hilbert space embedding for distributions. International Conference on Algorithmic Learning Theory, 2007.
[28] K. Sohn, X. Yan, and H. Lee. Learning structured output representation using deep conditional generative
models. NIPS, 2015.
[29] L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions
with applications to dynamical systems. ICML, 2009.
[30] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep boltzmann machines. NIPS, 2012.
[31] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator.
arXiv:1411.4555v2, 2015.
[32] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual
attributes. arXiv:1512.00570, 2015.
[33] M. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks.
ICLR, 2013.
A Credit Assignment Compiler for Joint Prediction
Kai-Wei Chang
University of Virginia
[email protected]
He He
University of Maryland
[email protected]
John Langford
Microsoft Research
[email protected]
Hal Daumé III
University of Maryland
[email protected]
Stephane Ross
Google
[email protected]
Abstract
Many machine learning applications involve jointly predicting multiple mutually
dependent output variables. Learning to search is a family of methods where the
complex decision problem is cast into a sequence of decisions via a search space.
Although these methods have shown promise both in theory and in practice, implementing them has been burdensome and awkward. In this paper, we show that the search space can be defined by an arbitrary imperative program, turning learning to search into a credit assignment compiler. Together with algorithmic improvements in the compiler, we radically reduce both the complexity of programming and the running time. We demonstrate the feasibility of our approach on multiple joint prediction tasks. In all cases, we obtain accuracies as high as those of alternative approaches, at drastically reduced execution and programming time.
1 Introduction
Many applications require a predictor to make coherent decisions. As an example, consider recognizing a handwritten word where each character might be recognized in turn to understand the word.
Here, it is commonly observed that exposing information from related predictions (i.e. adjacent
letters) aids individual predictions. Furthermore, optimizing a joint loss function can improve the
gracefulness of error recovery. Despite these advantages, it is empirically common to build independent predictors, in settings where joint prediction naturally applies, because they are simpler to
implement and faster to run. Can we make joint prediction algorithms as easy and fast to program
and compute while maintaining their theoretical benefits?
Methods making a sequence of sub-decisions have been proposed for handling complex joint predictions in a variety of applications, including sequence tagging [30], dependency parsing (known as
the transition-based method) [35], machine translation [18], and co-reference resolution [44]. Recently,
general search-based joint prediction approaches (e.g., [10, 12, 14, 22, 41]) have been investigated.
The key issue of these search-based approaches is credit assignment: when something goes wrong
do you blame the first, second, or third prediction? Existing methods often take two strategies:
- The system ignores the possibility that a previous prediction may have been wrong, that different errors may have different costs, or that train-time and test-time predictions differ.
- The system uses handcrafted credit assignment heuristics to cope with errors that the underlying algorithm makes and the long-term outcomes of decisions.
Both approaches may lead to statistical inconsistency: when features are not rich enough for perfect
prediction, the machine learning may converge sub-optimally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Algorithm 1 MYRUN(X) % for sequence tagging, X: input sequence, Y: output
A sample user-defined function, where PREDICT and LOSS are library functions (see text). The credit assignment compiler translates the code and data into model updates. More examples are in the appendices.
1: Y ← []
2: for t = 1 to LEN(X) do
3:   ref ← X[t].true_label
4:   Y[t] ← PREDICT(x=examples[t], y=ref, tag=t, condition=[1:t-1])
5: LOSS(number of Y[t] ≠ X[t].true_label)
6: return Y
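For concreteness, here is a minimal Python rendition of Algorithm 1 against a hypothetical two-function library interface (a search object exposing predict and loss); the names and signatures are illustrative, not the exact Vowpal Wabbit API.

def my_run(search, sentence):
    # sentence: list of (features, true_label) pairs.
    # The same function is used for training and prediction: at training
    # time the library uses `oracle` and `loss`; at test time it ignores them.
    output = []
    for t, (feats, true_label) in enumerate(sentence):
        # tag identifies this prediction; condition declares which earlier
        # predictions this one depends on (here: the previous position).
        y = search.predict(x=feats, oracle=true_label,
                           tag=t + 1, condition=[t] if t > 0 else [])
        output.append(y)
    search.loss(sum(y != gold for y, (_, gold) in zip(output, sentence)))
    return output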
In contrast, learning to search approaches [5, 11, 40] automatically handle the credit assignment
problem by decomposing the production of the joint output in terms of an explicit search space
(states, actions, etc.); and learning a control policy that takes actions in this search space. These have
formal correctness guarantees which differ qualitatively from models such as Conditional Random
Fields [28] and structured SVMs [46, 47]. Despite these good properties, none of these methods has been widely adopted, because specifying a search space as a finite state machine is awkward, and naive implementations do not fully demonstrate the ability of these methods.
In this paper, we cast learning to search into a credit assignment compiler with a new programming
abstraction for representing a search space. Together with several algorithmic improvements, this
radically reduces both the complexity of programming¹ and the running time. The programming
interface has the following advantages:
- The same decoding function (see Alg. 1 for an example) is used for training and prediction, so a developer need only code the desired test-time behavior and gets training "for free". This simple implementation prevents common train/test asynchrony bugs.
- The compiler automatically ensures the model learns to avoid compounding errors and makes a sequence of coherent decisions.
- The library functions sit in a reduction stack, so as base classifiers and learning to search approaches improve, so does joint prediction performance.
We implement the credit assignment compiler in Vowpal-Wabbit (http://hunch.net/~vw/),
a fast online learning library, and show that the credit assignment compiler achieves outstanding empirical performance both in accuracy and in speed on several application tasks. This provides strong, simple baselines for future research and demonstrates that the compiler approach to solving complex prediction problems may be of broad interest. Detailed experimental settings are in the appendices.
2 Programmable Learning to Search
We first describe the proposed programmable joint prediction paradigm. Algorithm 1 shows sample
code for a part of speech tagger (or generic sequence labeler) under Hamming loss. The algorithm
takes as input a sequence of examples (e.g., words), and predicts the meaning of each element in
turn. The i-th prediction depends on previous predictions.² It uses two underlying library functions, PREDICT(...) and LOSS(...). The function PREDICT(...) returns individual predictions based on x, while LOSS(...) allows the declaration of an arbitrary loss for the joint set of predictions. The LOSS(...) function and the reference y passed to PREDICT(...) are only used in the training phase and have no effect in the test phase. Surprisingly, this single library interface is sufficient for both testing and training, when augmented to include label "advice" from a training set as a reference
decision (by the parameter y). This means that a developer only has to specify the desired test time
behavior and gets training with minor additional decoration. The underlying system works as a
credit assignment compiler to translate the user-specified decoding function and labeled data into
updates of the learning model.
How can you learn a good PREDICT function given just an imperative program like Algorithm 1?
In the following, we show that it is essential to run the MYRUN(...) function (e.g., Algorithm 1) many times, "trying out" different versions of PREDICT(...) to learn one that yields low LOSS(...).
We begin with formal definitions of joint prediction and a search space.
1: With library support, developing a new task often requires only a few lines of code.
2: In this example, we use the library's support for generating implicit features based on previous predictions.
Figure 1: A search space implicitly defined by an imperative program. The system begins at the start state S and chooses the middle action twice according to the rollin policy. At state R it considers both the chosen action (middle) and one-step deviations from that action (top and bottom). Each of these deviations is completed using the rollout policy until reaching an end state E, at which point the loss is collected (0, 0.2, and 0.8 at the three end states). Here, we learn that deviating to the top action (instead of middle) at state R decreases the loss by 0.2.
The definition of a TDOLR program:
- Always terminates.
- Takes as input any relevant feature information X.
- Makes zero or more calls to an oracle O : X' → Y which provides a discrete outcome.
- Reports a loss L on termination.

Algorithm 2 TDOLR(X)
1: s ← a
2: while s ∉ E do
3:   Compute x_s from X and s
4:   s ← O(x_s)
5: return LOSS(s)

Figure 2: Left: the definition; right: a TDOLR program simulating the search space.
Joint Prediction. Joint prediction aims to induce a function f such that for any X ∈ 𝒳 (the input space), f produces an output f(X) = Y ∈ 𝒴(X) in a (possibly input-dependent) space 𝒴(X). The output Y can often be decomposed into smaller pieces (e.g., y₁, y₂, ...), which are tied together by features, by a loss function, and/or by statistical dependence. There is a task-specific loss function ℓ : 𝒴 × 𝒴 → ℝ≥0, where ℓ(Y*, Ŷ) tells us how bad it is to predict Ŷ when the truth is Y*.
Search Space. In our framework, the joint variable Ŷ is produced incrementally by traversing a search space, which is defined by states s ∈ S and a mapping A : S → 2^S defining the set of valid next states.³ One of the states is a unique start state S, while some of the others are end states e ∈ E. Each end state corresponds to some output variable Y_e. The goal of learning is to find a function f : X_s → S that uses the features of an input state (x_s) to choose the next state so as to minimize the loss ℓ(Y*, Y_e) on a holdout test set.⁴ Following reinforcement learning terminology, we call such a function a policy, and call the learned function f a learned policy π_f.
Turning a Search Space into an Imperative Program. Surprisingly, a search space can be represented by a class of imperative programs, called Terminal Discrete Oracle Loss Reporting (TDOLR) programs. The formal definition of TDOLR is given in Figure 2. Without loss of generality, we assume the number of choices is fixed in a search space, and the following theorem holds:
Theorem 1. For every TDOLR program, there exists an equivalent search space, and for every search space, there exists an equivalent TDOLR program.
Proof. A search space is defined by (A, E, S, l). Algorithm 2 gives a TDOLR program which can simulate the search space: it does a straightforward execution of the search space, followed by reporting of the loss on termination. This completes the second claim. For the first claim, we need to define (A, E, S, l) given a TDOLR program, such that the search space can simulate the TDOLR program. At any point in the execution of TDOLR, we define an equivalent state s = (O(X₁), ..., O(Xₙ)), where n is the number of calls to the oracle. We define a as the sequence of zero length, and we define E as the set of states after which TDOLR terminates. For each s ∈ E we define l(s) as the loss reported on termination. This search space manifestly outputs the same loss as the TDOLR program.
The practical implication of this theorem is that instead of specifying search spaces, we can specify a TDOLR program (e.g., Algorithm 1), reducing the programming complexity of joint prediction.
3: Comprehensive strategies for defining search spaces have been discussed [14]. The theoretical properties do not depend on which search space definition is used.
4: Note that we use X and Y to represent the joint input and output, and use x and y to represent the input and output of function f and PREDICT.
Algorithm 3 LEARN(X, F)
1: T, ex, cache ← 0, [], []
2: define PREDICT(x, y) := { T++; ex[T-1] ← x; cache[T-1] ← F(x, y, rollin); return cache[T-1] }
3: define LOSS(l) := no-op
4: MYRUN(X)   % MYRUN(X) is a user-defined TDOLR program (e.g., Algorithm 1).
5: for t0 = 1 to T do
6:   losses, t ← ⟨0, 0, ..., 0⟩, 0
7:   for a0 = 1 to A(ex[t0]) do
8:     define PREDICT(x, y) := { t++; return cache[t-1] if t < t0, a0 if t = t0, F(x, y, rollout) if t > t0 }
9:     define LOSS(val) := { losses[a0] += val }
10:    MYRUN(X)
11:  Online update with the cost-sensitive example (ex[t0], losses)
3 Credit Assignment Compiler for Training a Joint Predictor
Now, we show how a credit assignment compiler turns a TDOLR program and training data into model updates. In the training phase, the supervised signals are used in two places: 1) to define the loss function, and 2) to construct a reference policy π*. The reference policy returns, at any prediction point, a "suggestion" as to a good next state.⁵ The general strategy is, for some number of epochs, and for each example (X, Y) in the training data, to do the following:
1. Execute MYRUN(...) on X with a rollin policy to obtain a trajectory of actions ā and loss ℓ₀.
2. Many times:
(a) For some (or for all) time steps t ≤ |ā|
(b) For some (or for all) alternative actions a′_t ≠ a_t (a_t is the action taken by ā at time step t)
(c) Execute MYRUN(...) on X, with PREDICT returning a_{1:t-1} initially, then a′_t, then acting according to a rollout policy, to obtain a new loss ℓ_{t,a′_t}
(d) Compare the overall losses ℓ_{t,a_t} and ℓ_{t,a′_t} to construct a classification/regression example that demonstrates how much better or worse a′_t is than a_t in this context.
3. Update the learned policy.
The rollin and rollout policies can be the reference π*, the current classifier π_f, or a mixture of the two. By varying them and the manner in which classification/regression examples are created, this general framework can mimic algorithms like Searn [11], DAgger [41], AggreVaTe [40], and LOLS [5].⁶
The full learning algorithm (for a single joint input X) is depicted in Algorithm 3.⁷ In lines 1–4, a rollin pass of MYRUN is executed. MYRUN can generally be any TDOLR program, as discussed above (e.g., Alg. 1). In this pass, predictions are made according to the current policy, F, flagged as rollin
(this is to enable support of arbitrary rollin and rollout policies). Furthermore, the examples (feature
vectors) encountered during prediction are stored in ex, indexed by their position in the sequence
(T), and the rollin predictions are cached in the variable cache (see Sec. 4).
The algorithm then initiates one-step deviations from this rollin trajectory. For every time step (line 5), we generate a single cost-sensitive classification example; its features are ex[t0], and there are A(ex[t0]) possible labels (= actions). For each action (line 7), we compute the cost of that action by executing MYRUN again (line 10) with a "tweaked" PREDICT, which returns the cached predictions at steps before t0, returns the perturbed action a′ at t0, and at future time steps calls F for rollouts. The LOSS function accumulates the loss for the queried action. Finally, a cost-sensitive classification example is generated (line 11) and fed into an online learning algorithm.

5: Some papers assume the reference policy is optimal. An optimal policy always chooses the best next state assuming it gets to make all future decisions as well.
6: E.g., rollin in LOLS is π_f and rollout is a stochastic interpolation of π_f and the oracle π* constructed from y.
7: This algorithm is awkward because standard computational systems have a single stack. We have elected to give MYRUN control of the stack to ease the implementation of joint prediction tasks. Consequently, the learning algorithm does not have access to the machine stack and must be implemented as a state machine.
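A minimal Python sketch of this loop, assuming a decode function my_run(predict, loss_fn, X) in the style of Algorithm 1 that takes the two hooks directly, a policy(x, y, mode) callable, and a cost-sensitive learner exposing update(features, costs); using one policy for both rollin and rollout is a simplification for brevity.

def learn_one_example(my_run, policy, learner, X, num_actions):
    # Rollin pass: record the features and predictions at each step.
    ex, cache = [], []
    def predict_rollin(x, y):
        ex.append(x)
        cache.append(policy(x, y, mode="rollin"))
        return cache[-1]
    my_run(predict_rollin, lambda l: None, X)   # LOSS is a no-op on rollin

    # One-step deviations: try each alternative action at each position.
    for t0 in range(len(ex)):
        costs = [0.0] * num_actions
        for a0 in range(num_actions):
            state = {"t": 0}
            def predict_dev(x, y):
                t = state["t"]; state["t"] += 1
                if t < t0:  return cache[t]              # replay rollin prefix
                if t == t0: return a0                    # the deviation
                return policy(x, y, mode="rollout")      # complete with rollout
            my_run(predict_dev,
                   lambda l: costs.__setitem__(a0, costs[a0] + l), X)
        learner.update(ex[t0], costs)   # cost-sensitive multiclass example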
4 Optimizing the Credit Assignment Compiler
We present two algorithmic improvements which make training orders of magnitude faster.
Optimization 1: Memoization The primary computational cost of Alg. 3 is making predictions:
namely, calling the underlying classifier in Step 10. In order to avoid redundant predictions, we
cache previous predictions. The challenge is understanding how to know when two predictions are
going to be identical, faster than actually computing the prediction. To accomplish this, the user
may decorate calls to the PREDICT function with tags. For a graphical model, a tag is effectively the "name" of a particular variable in the graphical model. For a sequence labeling problem, the tag for a given position might just be its index. When calling PREDICT, the user specifies both the tag of the current prediction and the tags of all previous predictions on which the current prediction depends. The user guarantees that if the predictions for all the tags in the dependent variables
are the same, then the prediction for the current example is the same.
Under this assumption, we store a cache that maps triples ⟨tag, condition tags, condition predictions⟩ to ⟨current prediction⟩. The added overhead of maintaining this data structure is tiny in comparison to making repeated predictions on the same features. Note that in line 11 the learned policy changes, which makes correctness subtle: for data-mixing algorithms (like DAgger), this potentially changes F, implying the memoized predictions may no longer be up to date. Thus this optimization is sound if the policy does not change much. We evaluate this empirically in Section 5.3.
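A sketch of such a cache wrapped around an underlying predictor; the eviction-free dictionary is an illustrative simplification.

class MemoizedPredictor:
    # Caches predictions keyed on (tag, condition tags, condition predictions).
    # The key deliberately omits the raw features: under the user's guarantee,
    # equal keys imply equal predictions, so features need not be recomputed.
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.cache = {}

    def predict(self, x, tag, condition_tags, condition_preds):
        key = (tag, tuple(condition_tags), tuple(condition_preds))
        if key not in self.cache:
            self.cache[key] = self.predict_fn(x)
        return self.cache[key]

    def invalidate(self):
        # Call after a policy update if predictions may have changed.
        self.cache.clear()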
Optimization 2: Forced Path Collapse The second optimization we can use is a heuristic that
only makes rollout predictions for a constant number of steps (e.g., 2 or 4). The intuition is that
optimizing against a truly long-term reward may be impossible if no features are available at the current time t0 that enable the underlying learner to distinguish between the outcomes of decisions far in the future. The optimization stops rollouts after some fixed number of rollout steps.
This intuitive reasoning is correct, except for accumulating LOSS(...). If LOSS(...) is only declared at the end of MYRUN, then we must execute T − t0 time steps making (possibly memoized) predictions. However, for many problems it is possible to declare loss early, as with Hamming loss (= number of incorrect predictions). There is no need to wait until the end of the sequence to declare a per-sequence loss: one can declare it after every prediction and have the total loss accumulate (hence the "+=" on line 9). We generalize this notion slightly to that of a history-independent loss:
Definition 1 (History-independent loss). A loss function is history-independent at state s₀ if, for any final state e reachable from s₀ and for any sequence s₀s₁s₂...sᵢ = e, it holds that LOSS(e) = A(s₀) + B(s₁s₂...sᵢ), where B does not depend on any state before s₁.
For example, Hamming loss is history-independent: A(s₀) corresponds to the loss through s₀ and B(s₁...sᵢ) is the loss after s₀.⁸ When the loss function being optimized is history-independent, we allow LOSS(...) to be declared early for this optimization. In addition, for tasks like transition-based dependency parsing, although LOSS(...) is not decomposable over actions, the expected cost per action can be computed directly from the gold labels [19], so the array losses can be specified directly.
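In the tagger sketch from Section 2, early loss declaration amounts to moving the loss call inside the loop; a hedged illustration reusing the same hypothetical interface:

def my_run_early_loss(search, sentence):
    # Same tagger as before, but Hamming loss is declared per step, so a
    # truncated rollout still reports the exact loss for its prefix, matching
    # the history-independent decomposition A(s0) + B(...).
    output = []
    for t, (feats, gold) in enumerate(sentence):
        y = search.predict(x=feats, oracle=gold,
                           tag=t + 1, condition=[t] if t > 0 else [])
        output.append(y)
        search.loss(1.0 if y != gold else 0.0)   # accumulates via "+="
    return output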
Speed Up. We analyze the time complexity of the sequence tagging task. Suppose that the cost of calling the policy is d and each state has k actions.⁹ Without any speed enhancements, each execution of MYRUN takes O(T) time, and we execute it Tk + 1 times, yielding an overall complexity of O(kT²d) per joint example. For comparison, structured SVMs or CRFs with first-order Markov dependencies run in O(k²T) time. When both memoization and forced path collapse are in effect, the complexity of training drops to O(Tkd), similar to independent prediction. In particular, if the i-th prediction only depends on the (i−1)-th prediction, then at most Tk unique predictions are made.¹⁰

8: Any loss function that decomposes over the structure, as required by structured SVMs, is guaranteed to also be history-independent; the reverse is not true. Furthermore, when structured SVMs are run with a non-decomposable loss function, their runtime becomes exponential in t. When our approach is used with a loss function that is not history-independent, our runtime increases by a factor of t.
9: Because the policy is a multiclass classifier, d might hide a factor of k or log k.

Figure 3: Training time (minutes) versus test accuracy for POS and NER. Different points correspond to different termination criteria for training. The rightmost figure uses default hyperparameters and the two left figures use hyperparameters that were tuned (for accuracy) on the holdout data. Results of NER with default parameters are in the appendix. The x-axis is in log scale.
5 System Performance
We present two sets of experiments. In the first set, we compare the credit assignment compiler with existing libraries on two sequence tagging problems: part-of-speech tagging (POS) on the Wall Street Journal portion of the Penn Treebank, and a sequence chunking problem, named entity recognition (NER), based on the standard Begin-In-Out encoding on the CoNLL 2003 dataset. In the second set of experiments, we demonstrate that a simple dependency parser built with our approach achieves strong results when compared with systems of similar complexity. The parser is evaluated on the standard WSJ (English, Stanford-style labels) and CTB (Chinese) datasets and on the CoNLL-X datasets for 10 other languages.¹¹ Our approach is implemented using the Vowpal Wabbit [29] toolkit on top of
a cost-sensitive classifier [3] trained with online updates [15, 24, 42]. Details of dataset statistics,
experimental settings, additional results on other applications, and pseudocode are in the appendix.
5.1 Sequence Tagging Tasks
We compare our system with freely available systems, including CRF++ [27], CRF SGD [4], Structured Perceptron [9], Structured SVM [23], Structured SVM (DEMI-DCD) [6], and an unstructured baseline (OAA) predicting each label independently using one-against-all classification [3].¹²
For each system, we consider two situations, either the default hyperparameters or the tuned
hyperparameters that achieved the best performance on holdout data. We report both conditions
to give a sense of how sensitive each approach is to the setting of hyperparameters (the amount of
hyperparameter tuning directly affects effective training time). We use the built-in feature template
of CRF++ to generate features and use them for other systems. The templates included neighboring
words and, in the case of NER, neighboring POS tags. The CRF++ templates generate 630k unique
features for the training data. However, because L2S is also able to generate features from its own
templates, we also provide results for L2S (ft) in which it uses its own feature template generation.
Training time. In Figure 3, we show trade-offs between training time (x-axis, log scaled) and
prediction accuracy (y-axis) for the aforementioned six systems. For POS tagging, the independent
classifier is the fastest (trains in less than one minute) but its performance peaks at 95% accuracy.
Three other approaches are in roughly the same time/accuracy trade-off: L2S, L2S (ft), and Structured Perceptron. CRF SGD takes about twice as long. DEMI-DCD (taking half an hour) and CRF++ (taking over five hours) are not competitive.

10: We use tied randomness [34] to ensure that for any time step, the same policy is called.
11: PTB and CTB are prepared by following [8], and CoNLL-X is from the CoNLL shared task 06.
12: Structured Perceptron and Structured SVM (DEMI-DCD) are implemented in Illinois-SL [7]. DEMI-DCD is a multi-core dual approach, while Structured SVM uses cutting planes.
Table 1: UAS on PTB, CTB and CoNLL-X. Best: the best known result in CoNLL-X or the best published results (CTB, PTB) using arbitrary features and resources. See details and additional results in the text and in the appendix.¹⁵

Parser   AR     BU    CH    CZ+   DA    DU+   JA+   PO+    SL+    SW    PTB    CTB
DYNA     75.3   89.8  88.7  81.5  87.9  74.2  92.1  88.9   78.5   88.9  90.3   80.0
SNN      67.4*  88.1  87.3  78.2  83.0  75.3  89.5  83.2*  63.6*  85.7  91.8#  83.9#
L2S      78.2   92.0  89.8  84.8  89.8  79.2  91.8  90.6   82.2   89.7  91.9   85.1
BEST     79.3   92.0  93.2  87.3  90.6  83.6  93.2  91.4   83.2   89.5  94.4#  87.2#
Structured SVM runs out of memory before achieving competitive performance (likely due to too many constraints). For NER the story is a bit different. The
independent classifiers are not competitive. Here, the two variants of L2S totally dominate. In this
case, Structured Perceptron is no longer competitive and is essentially dominated by CRF SGD. The
only system coming close to L2S's performance is DEMI-DCD, although its performance flattens out after a few minutes.¹³ The trends in the runs with default hyperparameters show similar behavior to those with tuned hyperparameters, though some of the competing approaches suffer significantly in prediction
performance. Structured Perceptron has no hyperparameters.
Test Time. In addition to training time, one might care about test-time behavior. On NER, prediction speeds were 5.3k tokens/second (DEMI-DCD and Structured Perceptron), 20k (CRF SGD and Structured SVM), 100k (CRF++), 220k (L2S (ft)), and 285k (L2S). Although CRF SGD and Structured Perceptron fared well in terms of training time, their test-time behavior is suboptimal. When the number of labels increases from 9 (NER) to 45 (POS), the relative advantage of L2S increases further: the speed of L2S is about halved, while for the others it is cut down by as much as a factor of 8, due to the O(k) vs. O(k²) dependence on the label set size.
5.2 Dependency Parsing
To demonstrate how the credit assignment compiler handles predictions with complex dependencies, we implement an arc-eager transition-based dependency parser [35]. At each state, it takes one of the four actions {Shift, Reduce, Left, Right}, based on a simple neural network with one hidden layer of size 5, and produces a dependency parse of the sentence at the end (a minimal sketch of the transition system is given below). The rollin policy is the current (learned) policy. The probability of executing the reference policy (dynamic oracle) [19] for rollout decreases over each round. We compare our model with two recent greedy transition-based parsers implemented by the original authors: the dynamic oracle parser (DYNA) [19] and the Stanford neural network parser (SNN) [8]. We also present the best results in CoNLL-X and the best published results for CTB and PTB. Performance is evaluated by unlabeled attachment score (UAS). Punctuation is excluded.
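For readers unfamiliar with the transition system, the sketch below shows the four arc-eager actions over a stack/buffer state. Feature extraction, the scoring network, and precondition checks are omitted for brevity, and this is a generic rendition rather than the paper's C++ implementation.

def arc_eager_step(stack, buffer, arcs, action):
    # One arc-eager transition. `arcs` collects (head, dependent) pairs.
    if action == "Shift":          # push the next buffer word onto the stack
        stack.append(buffer.pop(0))
    elif action == "Right":        # arc stack-top -> buffer-front, then shift
        arcs.append((stack[-1], buffer[0]))
        stack.append(buffer.pop(0))
    elif action == "Left":         # arc buffer-front -> stack-top, then pop
        arcs.append((buffer[0], stack.pop()))
    elif action == "Reduce":       # pop a word that already has a head
        stack.pop()
    return stack, buffer, arcs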
Table 1 shows the results. Our implementation, with only ~300 lines of C++ code, is competitive with DYNA and SNN, which are specifically designed for parsing. Remarkably, our system achieves
strong performance on CoNLL-X without tuning any hyper-parameters, even beating heavily tuned
systems participating in the challenge on one dataset. The best system to date on PTB [2] uses a
global normalization, more complex neural network layers and k-best POS tags. Similarly, the best
system for CTB [16] uses stack LSTM architectures tailored for dependency parsing.
5.3 Empirical evaluation of optimizations
In Section 4, we discussed two approaches for computational improvement. Memoization avoids re-predicting on the same input multiple times, while path collapse stops rollouts at a particular point in time.

13: We also tried giving CRF SGD the features computed by L2S (ft) on both POS and NER. On POS, its accuracy improved to 96.5 with essentially the same speed. On NER, its performance decreased.
15: (*) SNN makes assumptions about the structure of languages and hence obtains substantially worse performance on languages with multi-root trees. (+) Languages containing more than 1% non-projective arcs, where a transition-based parser (e.g., L2S) likely underperforms a graph-based parser (Best) due to the model assumptions. (#) Numbers reported in the published papers [8, 16, 2].
[Figure 4, left] Training time with and without the optimizations:

                 NER              POS
                 LOLS    Searn    LOLS     Searn
No Opts          96s     123s     3739s    4255s
Mem.             75s     85s      1142s    1215s
Col.@4+Mem.      71s     75s      1059s    1104s
Col.@2+Mem.      69s     71s      1038s    1074s

[Figure 4, right] Plot "Effect of Caching on Training Efficiency": y-axis is the speedup factor for training (0 to 10); x-axis is log10(α), which controls the mixing rate of the rollout policy; one curve per history length (history = 1, 2, 3, 4).

Figure 4: The table on the left shows the effect of Collapse (Col.) and Memoization (Mem.). The figure on the right shows the speedup obtained for different history lengths and mixing rates of the rollout policy. Large α corresponds to more predictions required when training the model.
The effect of the different optimizations depends greatly on the underlying learning
algorithm. For example, DAgger does not do rollouts at all, so no efficiency is gained by either
optimization.¹⁶ The affected algorithms are LOLS (with mixed rollouts) and Searn.
Figure 4 shows the effect of these optimizations on the best NER and POS systems we trained
without using external resources. In the left table, we can see that memoization alone reduces
overall training runtime by about 25% on NER and about 70% on POS, essentially because the
overhead for the classifier on POS tagging is so much higher (45 labels versus 9). When rollouts
are terminated early, the speed increases are much more modest, essentially because memoization
is already accounting for much of these gains. In all cases, the final performance of the predictors
is within statistical significance of each other (p-value of 0.95, paired sign test), except for Collapse@2+Memoization on NER, where the performance decrease is only insignificant at the 0.90
level. The right figure demonstrates that as α increases, more predictions are required during training, and the speedup increases from a factor of 1 (no change) to a factor of as much as 9. However, as the history length increases, the speedup is more modest due to lower cache hit rates.
6 Related Work
Several algorithms are similar to learning to search approaches, including the incremental structured
perceptron [10, 22], HC-Search [13, 14], and others [12, 38, 45, 48, 49]. Some fit this framework.
Probabilistic programming [21] has been an active area of research. These approaches have a different goal: providing a flexible framework for specifying graphical models and performing inference in those models. The credit assignment compiler instead allows a developer to learn to make coherent decisions for joint prediction ("learning to search"). We also differ by not designing a new programming language. Instead, we have a two-function library, which makes adoption and integration into existing code bases much easier.
The closest work to ours is Factorie [31]. Factorie is essentially an embedded language for writing factor graphs, compiled into Scala to run efficiently.¹⁷ Similarly, Infer.NET [33], Markov Logic Networks (MLNs) [39], and Probabilistic Soft Logic (PSL) [25] concisely construct and use probabilistic graphical models. BLOG [32] falls in the same category, though with a very different focus.
Similarly, Dyna [17] is a related declarative language for specifying probabilistic dynamic programs,
and Saul [26] is a declarative language embedded in Scala that deals with joint prediction via integer
linear programming. All of these examples have picked particular aspects of the probabilistic modeling framework to focus on. Beyond these examples, there are several approaches that essentially
?reinvent? an existing programming language to support probabilistic reasoning at the first order
level. IBAL [36] derives from OCaml; Church [20] derives from LISP. IBAL uses a (highly optimized) form of variable elimination for inference that takes strong advantage of the structure of the
program; Church uses MCMC techniques, coupled with a different type of structural reasoning to
improve efficiency.
Acknowledgements Part of this work was carried out while Kai-Wei, Hal and Stephane were visiting Microsoft Research. Hal and He are also
supported by NSF grant IIS-1320538. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and
do not necessarily reflect the view of the sponsor. The authors thank anonymous reviewers for their comments.
16: Training speed is only degraded by about 0.5% with optimizations on, demonstrating negligible overhead.
17: Factorie-based implementations of simple tasks are still less efficient than systems like CRF SGD.
References
[1] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system. arXiv preprint
arXiv:1110.4198, 2011.
[2] D. Andor, C. Alberti, D. Weiss, A. Severyn, A. Presta, K. Ganchev, S. Petrov, and M. Collins. Globally normalized transition-based
neural networks. Arxiv, 2016.
[3] A. Beygelzimer, V. Dani, T. Hayes, J. Langford, and B. Zadrozny. Error limiting reductions between classification tasks. In ICML, pages
49–56, 2005.
[4] L. Bottou. crfsgd project, 2011. http://leon.bottou.org/projects/sgd.
[5] K.-W. Chang, A. Krishnamurthy, A. Agarwal, H. Daumé III, and J. Langford. Learning to search better than your teacher. In ICML,
2015.
[6] K.-W. Chang, V. Srikumar, and D. Roth. Multi-core structural SVM training. In ECML, 2013.
[7] K.-W. Chang, S. Upadhyay, M.-W. Chang, V. Srikumar, and D. Roth. Illinoissl: A JAVA library for structured prediction. Arxiv, 2015.
[8] D. Chen and C. Manning. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740?750, 2014.
[9] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP,
2002.
[10] M. Collins and B. Roark. Incremental parsing with the perceptron algorithm. In ACL, 2004.
[11] H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. Machine Learning Journal, 2009.
[12] H. Daumé III and D. Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In ICML,
2005.
[13] J. R. Doppa, A. Fern, and P. Tadepalli. Output space search for structured prediction. In ICML, 2012.
[14] J. R. Doppa, A. Fern, and P. Tadepalli. HC-Search: A learning framework for search-based structured prediction. JAIR, 50, 2014.
[15] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159,
2011.
[16] C. Dyer, M. Ballesteros, W. Ling, A. Matthews, and N. A. Smith. Transition-based dependency parsing with stack long short-term
memory. In ACL, 2015.
[17] J. Eisner, E. Goldlust, and N. A. Smith. Compiling comp ling: Practical weighted dynamic programming and the dyna language. In
EMNLP, 2005.
[18] U. Germann, M. Jahr, K. Knight, D. Marcu, and K. Yamada. Fast decoding and optimal decoding for machine translation. Artificial
Intelligence, 154(1-2):127–143, 2003.
[19] Y. Goldberg and J. Nivre. Training deterministic parsers with non-deterministic oracles. Transactions of the ACL, 1, 2013.
[20] N. Goodman, V. Mansinghka, D. Roy, K. Bonawitz, and J. Tenenbaum. Church: a language for generative models. In UAI, 2008.
[21] A. D. Gordon, T. A. Henzinger, A. V. Nori, and S. K. Rajamani. Probabilistic programming. In International Conference on Software
Engineering (ICSE, FOSE track), 2014.
[22] L. Huang, S. Fayong, and Y. Guo. Structured perceptron with inexact search. In NAACL, 2012.
[23] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning Journal, 2009.
[24] N. Karampatziakis and J. Langford. Online importance weight aware updates. In UAI, 2011.
[25] A. Kimmig, S. Bach, M. Broecheler, B. Huang, and L. Getoor. A short introduction to probabilistic soft logic. In NIPS Workshop on
Probabilistic Programming, 2012.
[26] P. Kordjamshidi, D. Roth, and H. Wu. Saul: Towards declarative learning based programming. In IJCAI, 2015.
[27] T. Kudo. CRF++ project, 2005. http://crfpp.googlecode.com.
[28] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In
ICML, pages 282?289, 2001.
[29] J. Langford, A. Strehl, and L. Li. Vowpal wabbit, 2007. http://hunch.net/~vw.
[30] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In ICML,
2000.
[31] A. McCallum, K. Schultz, and S. Singh. FACTORIE: probabilistic programming via imperatively defined factor graphs. In NIPS, 2009.
[32] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: probabilistic models with unknown objects. Statistical
relational learning, 2007.
[33] T. Minka, J. Winn, J. Guiver, and D. Knowles. Infer.NET 2.4. Microsoft Research Cambridge, 2010.
[34] A. Ng and M. Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In UAI, pages 406–415, 2000.
[35] J. Nivre. An efficient algorithm for projective dependency parsing. In IWPT, pages 149–160, 2003.
[36] A. Pfeffer. Ibal: A probabilistic rational programming language. In IJCAI, 2001.
[37] L. Ratinov and D. Roth. Design challenges and misconceptions in named entity recognition. In CoNLL, 2009.
[38] N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt. Boosting structured prediction for imitation learning. In NIPS, 2007.
[39] M. Richardson and P. Domingos. Markov logic networks. Machine learning, 62(1-2), 2006.
[40] S. Ross and J. A. Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv:1406.5979, 2014.
[41] S. Ross, G. J. Gordon, and J. A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In
AI-Stats, 2011.
[42] S. Ross, P. Mineiro, and J. Langford. Normalized online learning. In UAI, 2013.
[43] D. Roth and S. W. Yih. Global inference for entity and relation identification via a linear programming formulation. In Introduction to
Statistical Relational Learning. MIT Press, 2007.
[44] W. M. Soon, H. T. Ng, and D. C. Y. Lim. A machine learning approach to coreference resolution of noun phrases. Computational
Linguistics, 27(4):521–544, 2001.
[45] U. Syed and R. E. Schapire. A reduction from apprenticeship learning to classification. In NIPS, 2011.
[46] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[47] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output
spaces. In ICML, 2004.
[48] Y. Xu and A. Fern. On learning linear ranking functions for beam search. In ICML, pages 1047–1054, 2007.
[49] Y. Xu, A. Fern, and S. W. Yoon. Discriminative learning of beam-search heuristics for planning. In IJCAI, pages 2041–2046, 2007.
5,810 | 6,257 | Joint Line Segmentation and Transcription for
End-to-End Handwritten Paragraph Recognition
Théodore Bluche
A2iA SAS
39 rue de la Bienfaisance
75008 Paris
[email protected]
Abstract
Offline handwriting recognition systems require cropped text line images for both
training and recognition. On the one hand, the annotation of position and transcript
at line level is costly to obtain. On the other hand, automatic line segmentation
algorithms are prone to errors, compromising the subsequent recognition. In this
paper, we propose a modification of the popular and efficient Multi-Dimensional
Long Short-Term Memory Recurrent Neural Networks (MDLSTM-RNNs) to
enable end-to-end processing of handwritten paragraphs. More particularly, we
replace the collapse layer transforming the two-dimensional representation into
a sequence of predictions by a recurrent version which can select one line at a
time. In the proposed model, a neural network performs a kind of implicit line
segmentation by computing attention weights on the image representation. The
experiments on paragraphs of Rimes and IAM databases yield results that are
competitive with those of networks trained at line level, and constitute a significant
step towards end-to-end transcription of full documents.
1 Introduction
Offline handwriting recognition consists in recognizing a sequence of characters in an image of
handwritten text. Unlike printed texts, images of handwriting are difficult to segment into characters.
Early methods tried to compute segmentation hypotheses for characters, for example by performing a
heuristic over-segmentation, followed by a scoring of groups of segments (e.g. in [4]). In the nineties,
this kind of approach was progressively replaced by segmentation-free methods, where a whole
word image is fed to a system providing a sequence of scores. A lexicon constrains a decoding step,
allowing the character sequence to be retrieved. Some examples are the sliding window approach [25], in
which features are extracted from vertical frames of the line image, or space-displacement neural
networks [4]. In the last decade, word segmentations were abandoned in favor of complete text line
recognition with statistical language models [10].
Nowadays, the state-of-the-art handwriting recognition systems are Multi-Dimensional Long Short-Term Memory Recurrent Neural Networks (MDLSTM-RNNs [18]), which consider the whole image,
alternating MDLSTM layers and convolutional layers. The transformation of the 2D structure into
a sequence is computed by a simple collapse layer summing the activations along the vertical axis.
Connectionist Temporal Classification (CTC [17]) allows the network to be trained to both align and
recognize sequences of characters. These models have become very popular and won the recent
evaluations of handwriting recognition [9, 34, 37].
However, current models still need segmented text lines, and full document processing pipelines
should include automatic line segmentation algorithms. Although the segmentation of documents
into lines is assumed in most descriptions of handwriting recognition systems, several papers or
surveys state that it is a crucial step for handwriting text recognition systems [8, 28]. The need
for line segmentation to train the recognition system has also motivated several efforts to map a
paragraph-level or page-level transcript to line positions in the image (e.g. recently [7, 16]).
Handwriting recognition systems evolved from character to word segmentation, and to complete
line processing nowadays. The performance has always improved by making fewer segmentation
hypotheses. In this paper, we pursue this traditional tendency. We propose a model for multi-line recognition based on the popular MDLSTM-RNNs, augmented with an attention mechanism
inspired by the recent models for machine translation [3], image caption generation [38], or speech
recognition [11, 12]. In the proposed model, the "collapse" layer is modified with an attention
network, providing weights to modulate the importance given at different positions in the input. By
iteratively applying this layer to a paragraph image, the network can transcribe each text line in turn,
enabling a purely segmentation-free recognition of full paragraphs.
We carried out experiments on two public datasets of handwritten paragraphs: Rimes and IAM. We
report results that are competitive with the state-of-the-art systems, which use the ground-truth line
segmentation. The remainder of this paper is organized as follows. Section 2 presents methods related
to the one presented here, in terms of the tackled problem and modeling choices. In Section 3, we
introduce the baseline model: MDLSTM-RNNs. We expose in Section 4 the proposed modification,
and we give the details of the system. Experimental results are reported in Section 5, and followed by
a short discussion in Section 6, in which we explain how the system could be improved, and present
the challenge of generalizing it to complete documents.
2 Related Work
Our work is clearly related to MDLSTM-RNNs [18], which we improve by replacing the simple
collapse layer by a more elaborate mechanism, itself made of MDLSTM layers. The model we
propose iteratively performs an implicit line segmentation at the level of intermediate representations.
Classical text line segmentation algorithms are mostly based on image processing techniques and
heuristics. However, some methods were devised using statistical models and machine learning
techniques such as hidden Markov models [8], conditional random fields [21], or neural networks [24,
31, 32]. In our model, the line segmentation is performed implicitly and integrated in the neural
network. The intermediate features are shared by the transcription and the segmentation models, and
they are jointly trained to minimize the transcription error.
Recently, many "attention-based" models were proposed to iteratively select in an encoded signal
the relevant parts to make the next prediction. This paradigm, already suggested by Fukushima
in 1987 [15], was successfully applied to various problems such as machine translation [3], image
caption generation [38], speech recognition [11, 12], or cropped words in scene text [27]. Attention
mechanisms were also parts of systems that can generate or recognize small pieces of handwriting
(e.g. a few digits with DRAW [20] or RAM [2], or short online handwritten sequences [19]). Our
system is designed to handle long sequences and multiple lines.
In the field of computer vision, and particularly object detection and recognition, many neural
architectures were proposed to both locate and recognize the objects, such as OverFeat [35] or spatial
transformer networks (STN [22]). In a sense, our model is quite related to the DenseCap model for
image captioning [23], itself similar to STNs. However, we do not aim at explicitly predicting line
positions, and STNs are not as good with a large number of small objects.
We recently proposed an attention-based model to transcribe full paragraphs of handwritten text,
which predicts each character in turn [6]. Outputting one token at a time turns out to be prohibitive in
terms of memory and time consumption for full paragraphs, which typically contain hundreds
of characters. In the proposed system, the encoded image is not summarized as a single vector at each
timestep, but as a sequence of vectors representing full text lines. It represents a huge speedup, and
a return to the original MDLSTM-RNN architecture, in which the collapse layer is augmented
with an MDLSTM attention network similar to the one presented in [6].
3 Handwriting Recognition with MDLSTM and CTC
Figure 1: MDLSTM-RNN architecture for handwriting recognition. LSTM layers in four scanning directions are followed by convolutions. The feature maps of the top layer are summed in the vertical dimension, and character predictions are obtained after a softmax normalization.
MDLSTM-RNNs [18] were first introduced in the context of handwriting recognition. The Multi-Dimensional Long Short-Term Memory layers scan the input in the four possible directions. The
LSTM cell inner state and output are computed from the states and outputs of previous positions in
the considered horizontal and vertical directions. Each MDLSTM layer is followed by a convolutional
layer. At the top of this network, there is one feature map for each character. These maps are collapsed
into a sequence of prediction vectors, normalized with a softmax activation. The whole architecture
is depicted in Figure 1. The Connectionist Temporal Classification (CTC [17]) algorithm, which
considers all possible labellings of the sequence, may be applied to train the network to recognize
text lines.
The 2D to 1D conversion happens in the collapsing layer, which computes a simple aggregation of
the feature maps into vector sequences, i.e. maps of height 1. This is achieved by a simple sum across
the vertical dimension:
$$z_i = \sum_{j=1}^{H} a_{ij} \qquad (1)$$
where $z_i$ is the i-th output vector and $a_{ij}$ is the input feature vector at coordinates (i, j). All the
information in the vertical dimension is reduced to a single vector, regardless of its position in the
feature maps, preventing the recognition of multiple lines within this framework.
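For concreteness, a minimal NumPy sketch of this standard collapse is given below; the (width, height, channels) array layout and all names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def standard_collapse(a):
    """Standard collapse (Eqn. 1): sum the feature maps over the vertical axis.

    a: feature maps of shape (W, H, K) -- width, height, channels.
    Returns a sequence of W feature vectors of dimension K.
    """
    return a.sum(axis=1)  # z_i = sum_j a_ij

# Toy example: a 20x8 feature map with 16 channels collapses to 20 vectors.
a = np.random.randn(20, 8, 16)
z = standard_collapse(a)
assert z.shape == (20, 16)
```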
4 An Iterative Weighted Collapse for End-to-End Handwriting Recognition
In this paper, we replace the sum of Eqn. 1 by a weighted sum, in order to focus on a specific part of
the input. The weighted collapse is defined as follows:
(t)
zi
=
H
X
(t)
?ij aij
(2)
j=1
(t)
where ?ij are scalar weights between 0 and 1, computed at every time t for each position (i, j). The
weights are provided by a recurrent neural network, illustrated in Figure 2, enabling the recognition
of a text line at each timestep.
Figure 2: Proposed modification of the collapse layer. While the standard collapse (left, top) computes
a simple sum, the weighted collapse (right, bottom) includes a neural network to predict the weights
of a weighted sum.
This collapse, weighted with a neural network, may be interpreted as the "attention" module of an
attention-based neural network similar to those of [3, 38]. This mechanism is differentiable and can
be trained with backpropagation. The complete architecture may be described as follows.
An encoder extracts feature maps from the input image I:
$$a = (a_{ij})_{(i,j) \in [1,W] \times [1,H]} = \mathrm{Encoder}(I) \qquad (3)$$
where (i, j) are coordinates in the feature maps. In this work, the Encoder module is an MDLSTM
network with same architecture as the model presented in Section 3.
A weighted collapse provides a view of the encoded image at each timestep in the form of a weighted
sum of feature vector sequences. The attention network computes a score for the feature vectors at
every position:
$$s_{ij}^{(t)} = \mathrm{Attention}(a, \omega^{(t-1)}) \qquad (4)$$
We refer to $\omega^{(t)} = \{\omega_{ij}^{(t)}\}_{1 \le i \le W,\, 1 \le j \le H}$ as the attention map at time t, whose computation depends not only on the encoded image, but also on the previous attention features. A softmax normalization is applied to each column:
$$\omega_{ij}^{(t)} = e^{s_{ij}^{(t)}} \Big/ \sum_{j'} e^{s_{ij'}^{(t)}} \qquad (5)$$
In this work, the Attention module is an MDLSTM network.
This module is applied several times to the features from the encoder. The output of the attention
module at iteration t, computed with Eqn. 2, is a sequence of feature vectors z, intended to represent
a text line. Therefore, we may see this module as a soft line segmentation neural network. The
advantages over the neural networks trained for line segmentation [13, 24, 32, 31] are that (i) it works
on the same features as those used for the transcription (multi-task encoder) and (ii) it is trained to
maximize the transcription accuracy (i.e. more closely related to the goal of handwriting recognition
systems, and easily interpretable).
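The following sketch puts Eqns. 2, 4 and 5 together into one iterative pass; the scoring function is a stand-in argument for the MDLSTM attention network described above, and the uniform initialization is an assumption, not a detail from the paper.

```python
import numpy as np

def softmax_columns(s):
    """Column-wise softmax (Eqn. 5): normalize scores over the height axis."""
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def weighted_collapse(a, attention_fn, T):
    """Apply the attention-weighted collapse T times (Eqn. 2).

    a: encoder feature maps of shape (W, H, K).
    attention_fn: stand-in for the MDLSTM attention network; maps
        (a, previous attention map) -> scores of shape (W, H) (Eqn. 4).
    Returns T sequences of shape (W, K), one per (implicitly segmented) line.
    """
    W, H, K = a.shape
    omega = np.full((W, H), 1.0 / H)  # assumed initial attention map
    lines = []
    for _ in range(T):
        scores = attention_fn(a, omega)           # Eqn. 4
        omega = softmax_columns(scores)           # Eqn. 5
        z = (omega[:, :, None] * a).sum(axis=1)   # Eqn. 2
        lines.append(z)
    return lines

# Toy usage with a random scoring stand-in:
a = np.random.randn(20, 8, 16)
lines = weighted_collapse(a, lambda a, w: np.random.randn(*a.shape[:2]), T=3)
assert len(lines) == 3 and lines[0].shape == (20, 16)
```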
A decoder predicts a character sequence from the feature vectors:
$$y = \mathrm{Decoder}(z) \qquad (6)$$
where z is the concatenation of $z^{(1)}, z^{(2)}, \ldots, z^{(T)}$. Alternatively, the decoder may be applied to the $z^{(i)}$ sub-sequences to get the $y^{(i)}$s, and y is the concatenation of $y^{(1)}, y^{(2)}, \ldots, y^{(T)}$.
In the standard MDLSTM architecture of Section 3, the decoder is a simple softmax. However, a
Bidirectional LSTM (BLSTM) decoder could be applied to the collapsed representations. This is
particularly interesting in the proposed model, as the BLSTM would potentially process the whole
paragraph, allowing the modeling of dependencies across text lines.
This model can be trained with CTC. If the line breaks are known in the transcript, the CTC could
be applied to the segments corresponding to each line prediction. Otherwise, one can directly apply
CTC to the whole paragraph. In this work, we opted for that strategy, with a BLSTM decoder applied
to the concatenation of all collapsing steps.
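As a rough illustration of this strategy, the CTC loss over the concatenated collapse steps could be computed with a standard implementation such as PyTorch's CTCLoss; the shapes below are assumptions for a single paragraph, and this is a sketch, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical setup: the decoder emits per-frame character log-probabilities
# over the concatenation of the T collapse steps (length T*W) for one paragraph.
ctc = nn.CTCLoss(blank=0)

def paragraph_ctc_loss(log_probs, target):
    """log_probs: (T*W, 1, C) log-softmax outputs; target: (S,) label indices
    for the full paragraph transcript (line breaks not required)."""
    input_lengths = torch.tensor([log_probs.size(0)])
    target_lengths = torch.tensor([target.size(0)])
    return ctc(log_probs, target.unsqueeze(0), input_lengths, target_lengths)
```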
5 Experiments
5.1 Experimental Setup
We carried out the experiments on two public databases. The IAM database [29] is made of
handwritten English texts copied from the LOB corpus. There are 747 documents (6,482 lines) in the
training set, 116 documents (976 lines) in the validation set and 336 documents (2,915 lines) in the
test set. The Rimes database [1] contains handwritten letters in French. The data consist of a training
set of 1,500 paragraphs (11,333 lines), and a test set of 100 paragraphs (778 lines). We held out the
last 100 paragraphs of the training set as a validation set.
The networks have the following architecture. The encoder first computes a 2x2 tiling of the input, followed by alternating MDLSTM layers of 4, 20 and 100 units and 2x4 convolutions of 12 and 32 filters
with no overlap. The last layer is a linear layer with 80 outputs for IAM and 102 for Rimes. The
attention network is an MDLSTM network with 2x16 units in each direction followed by a linear
layer with one output, and a softmax on columns (Eqn. 5). The decoder is a BLSTM network with 256
units. Dropout is applied after each LSTM layer [33]. The networks are trained with RMSProp [36]
with a base learning rate of 0.001 and mini-batches of 8 examples, to minimize the CTC loss over
entire paragraphs. The measure of performance is the Character (or Word) Error Rate (CER%),
corresponding to the edit distance between the recognition and ground-truth, normalized by the
number of ground-truth characters.
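For reference, CER% can be computed from a standard edit (Levenshtein) distance, as in the generic sketch below (not specific to the evaluation code used here).

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences via dynamic programming."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def cer(ref, hyp):
    """Character Error Rate (%): edit distance normalized by reference length."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)

print(cer("handwriting", "handwritten"))  # 3 substitutions -> ~27.3%
```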
5.2 Impact of the Decoder
In our model, the weighted collapse method is followed by a BLSTM decoder. In this experiment,
we compare the baseline system (standard collapse followed by a softmax) with the proposed model.
In order to dissociate the impact of the weighted collapse from that of the BLSTM decoder, we also
trained an intermediate architecture with a BLSTM layer after the standard collapse, but still limited
to text lines.
Table 1: Character Error Rates (%) of CTC-trained RNNs on 150 dpi images. The Standard models
are trained on segmented lines. The Attention models are trained on paragraphs.
Collapse     Decoder            IAM    Rimes
Standard     Softmax            8.4    4.9
Standard     BLSTM + Softmax    7.5    4.8
Attention    BLSTM + Softmax    6.8    2.5
The character error rates (CER%) on the validation sets are reported in Table 1 for 150 dpi images.
We observe that the proposed model outperforms the baseline by a large margin (relative 20%
improvement on IAM, 50% on Rimes), and that the gain may be attributed to both the BLSTM
decoder and the attention mechanism.
5.3 Impact of Line Segmentation
Our model performs an implicit line segmentation to transcribe paragraphs. The baseline considered
in the previous section is, in a sense, cheating, because it was evaluated on the ground-truth line
segmentation. In this experiment, we add to the comparison the baseline models evaluated in a real
scenario where they are applied to the result of an automatic line segmentation algorithm.
Table 2: Character Error Rates (%) of CTC-trained RNNs on ground-truth lines and automatic
segmentation of paragraphs with different resolutions. The last column contains the error rate of the
attention-based model presented in this work, without an explicit line segmentation.
Database   Resolution   GroundTruth   Line segmentation                  This work
                                      Projection   Shredding   Energy
IAM        150 dpi      8.4           15.5         9.3         10.2      6.8
IAM        300 dpi      6.6           13.8         7.5         7.9       4.9
Rimes      150 dpi      4.8           6.3          5.9         8.2       2.8
Rimes      300 dpi      3.6           5.0          4.5         6.6       2.5
In Table 2, we report the CERs obtained with the ground-truth line positions, with three different
segmentation algorithms, and with our end-to-end system, on the validation sets of both databases with
different input resolutions. We see that applying the baseline networks on automatic segmentations
increases the error rates, by an absolute 1% in the best case. We also observe that the models are
better with higher resolutions.
Our models yield better performance than methods based on an explicit and automatic line segmentation, and comparable or better results than with ground-truth segmentation, even with a resolution
divided by two. Two factors may explain why our model yields better results than the line recognition
from ground-truth segmentation. First, the ground-truth line positions are bounding boxes that may
include some parts of adjacent lines and include irrelevant data, whereas the attention model will
focus on smaller areas. But the main reason is probably that the proposed model includes a BLSTM
operating on the whole paragraph, which may capture linguistic dependencies across text lines.
In Figure 3, we display a visualisation of the implicit line segmentation computed by the network.
Each color corresponds to one step of the iterative weighted collapse. On the images, the color
represents the weights given by the attention network (the transparency encodes their intensity). The
texts below are the predicted transcriptions, and chunks are colored according to the corresponding
timestep of the attention mechanism.
Figure 3: Transcription of full paragraphs of text and implicit line segmentation learnt by the network
on IAM (left) and Rimes (right). Best viewed in color.
5.4 Comparison to Published Results
In this section, we also compute the word error rates (WER%) and evaluate our models on the test
sets to compare the proposed approach to existing systems. For IAM, we applied a 3-gram language
model with a lexicon of 50,000 words, trained on the LOB, Brown and Wellington corpora.1 This
language model has a perplexity of 298 and out-of-vocabulary rate of 4.3% on the validation set (329
and 3.7% on the test set).
The results are presented in Table 3 for different input resolutions. When comparing the error rates, it
is important to note that all systems in the literature used an explicit (ground-truth) line segmentation
and a language model. [14, 26, 30] used a hybrid character/word language model to tackle the issue
of out-of-vocabulary words. Moreover, all systems except [30, 33] carefully pre-processed the line
image (e.g. corrected the slant or skew, normalized the height, ...), whereas we just normalized the
pixel values to zero mean and unit variance. Finally, [5] is a combination of four systems.
Table 3: Final results on Rimes and IAM databases
                                        Rimes            IAM
                                        WER%    CER%     WER%    CER%
150 dpi    no language model            13.6    3.2      29.5    10.1
           with language model          -       -        16.6    6.5
300 dpi    no language model            12.6    2.9      24.6    7.9
           with language model          -       -        16.4    5.5
Bluche, 2015 [5]                        11.2    3.5      10.9    4.4
Doetsch et al., 2014 [14]               12.9    4.3      12.2    4.7
Kozielski et al. 2013 [26]              13.7    4.6      13.3    5.1
Pham et al., 2014 [33]                  12.3    3.3      13.6    5.1
Messina & Kermorvant, 2014 [30]         13.3    -        19.1    -
1 The parts of the LOB corpus used in the validation and evaluation sets were removed.
On Rimes, the system applied to 150 dpi images already outperforms the state of the art in CER%,
while being competitive in terms of WER%. The system for 300 dpi images is comparable to the best
single system [33] in WER% with a significantly better CER%.
On IAM, the language model turned out to be quite important, probably because there is more
variability in the language.2 On 150 dpi images, the results are not too far from the state of the art
results. The WER% does not improve much on 300 dpi images, but we get a lower CER%. When
analysing the errors, we noticed that there is a lot of punctuation in IAM, which was often missed by
the attention mechanism. It may happen because punctuation marks are significantly smaller than
characters. With the attention-based collapse and the weighted sum, they will be more easily missed
than with the standard collapse, which gives the same weight to all vertical positions.
6 Discussion
Table 4: Comparison of decoding times of different methods: using ground-truth line information,
with explicit segmentation, with the attention-based method of [6] and with the system presented in
this paper.
Method                                      Processing time (s)
GroundTruth (crop+reco)                     0.21 ± 0.07
Shredding (segment+crop+reco)               0.78 ± 0.26
Scan, Attend and Read [6] (reco)            21.2 ± 5.6
This Work (reco)                            0.62 ± 0.14
The proposed model can transcribe complete paragraphs without segmentation and is orders of
magnitude faster than the model of [6] (cf. Table 4). However, the mechanism cannot handle
arbitrary reading orders. Rather, it implements a sort of implicit line segmentation. In the current
implementation, the iterative collapse runs for a fixed number of timesteps. Yet, the model can handle
a variable number of text lines, and, interestingly, the focus is put on interlines in the additional steps.
A more elegant solution should include the prediction of a binary variable indicating when to stop
reading.
Our method was applied to paragraph images, so a document layout analysis is required to detect
those paragraphs before applying the model. Naturally, the next step should be the transcription of
complex documents without an explicit or assumed paragraph extraction. The limitation to paragraphs
is inherent to this system. Indeed, the weighted collapse always outputs sequences corresponding to
the whole width of the encoded image, which, in paragraphs, may correspond to text lines. In order to
switch to full documents, several issues arise. On the one hand, the size of the lines is determined by
the size of the text block. Thus a method should be devised to only select a smaller part of the feature
maps, representing only the considered text line. This is not possible in the presented framework. A
potential solution could come from spatial transformer networks [22], performing a differentiable
crop. On the other hand, training will in practice become more difficult, not only because of the
complexity of the task, but also because the reading order of text blocks in complex documents cannot
be exactly inferred in many cases (even defining arbitrary rules may be tricky).
7 Conclusion
We have presented a model to transcribe full paragraphs of handwritten texts without an explicit
line segmentation. Contrary to classical methods relying on a two-step process (segment, then
recognize), our system directly considers the paragraph image without elaborate pre-processing,
and outputs the complete transcription. We proposed a simple modification of the collapse layer
in the standard MDLSTM architecture to iteratively focus on single text lines. This implicit line
segmentation is learnt with backpropagation along with the rest of the network to minimize the
CTC error at the paragraph level. We reported error rates comparable to the state of the art on two
public databases. After switching from explicit to implicit character, then word segmentation for
handwriting recognition, we showed that line segmentation can also be learnt inside the transcription
model. The next step towards end-to-end handwriting recognition is now at the full page level.
2 A simple language model yields a perplexity of 18 on Rimes [5].
References
[1] E. Augustin, M. Carré, E. Grosicki, J.-M. Brodin, E. Geoffrois, and F. Preteux. RIMES evaluation campaign
for handwritten mail processing. In Proceedings of the Workshop on Frontiers in Handwriting Recognition,
number 1, 2006.
[2] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention.
arXiv preprint arXiv:1412.7755, 2014.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning
to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[4] Yoshua Bengio, Yann LeCun, Craig Nohl, and Chris Burges. Lerec: A NN/HMM hybrid for on-line
handwriting recognition. Neural Computation, 7(6):1289-1303, 1995.
[5] Théodore Bluche. Deep Neural Networks for Large Vocabulary Handwritten Text Recognition. Theses,
Université Paris Sud - Paris XI, May 2015.
[6] Théodore Bluche, Jérôme Louradour, and Ronaldo Messina. Scan, Attend and Read: End-to-End Handwritten Paragraph Recognition with MDLSTM Attention. arXiv preprint arXiv:1604.03286, 2016.
[7] Théodore Bluche, Bastien Moysset, and Christopher Kermorvant. Automatic line segmentation and groundtruth alignment of handwritten documents. In International Conference on Frontiers in Handwriting
Recognition (ICFHR), 2014.
[8] Vicente Bosch, Alejandro Hector Toselli, and Enrique Vidal. Statistical text line analysis in handwritten
documents. In Frontiers in Handwriting Recognition (ICFHR), 2012 International Conference on, pages
201-206. IEEE, 2012.
[9] Sylvie Brunessaux, Patrick Giroux, Bruno Grilhères, Mathieu Manta, Maylis Bodin, Khalid Choukri,
Olivier Galibert, and Juliette Kahn. The Maurdor Project: Improving Automatic Processing of Digital
Documents. In Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pages
349-354. IEEE, 2014.
[10] Horst Bunke, Samy Bengio, and Alessandro Vinciarelli. Offline recognition of unconstrained handwritten
texts using hmms and statistical language models. Pattern Analysis and Machine Intelligence, IEEE
Transactions on, 26(6):709-720, 2004.
[11] William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint
arXiv:1508.01211, 2015.
[12] Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pages
577-585, 2015.
[13] Manolis Delakis and Christophe Garcia. text detection with convolutional neural networks. In VISAPP (2),
pages 290-294, 2008.
[14] Patrick Doetsch, Michal Kozielski, and Hermann Ney. Fast and robust training of recurrent neural networks
for offline handwriting recognition, 2014.
[15] Kunihiko Fukushima. Neural network model for selective attention in visual pattern recognition and
associative recall. Applied Optics, 26(23):4985-4992, 1987.
[16] Basilis Gatos, Georgios Louloudis, Tim Causer, Kris Grint, Veronica Romero, Joan-Andreu Sánchez,
Alejandro Hector Toselli, and Enrique Vidal. Ground-truth production in the transcriptorium project. In
Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pages 237-241. IEEE,
2014.
[17] A Graves, S Fernández, F Gomez, and J Schmidhuber. Connectionist temporal classification: labelling
unsegmented sequence data with recurrent neural networks. In International Conference on Machine
learning, pages 369-376, 2006.
[18] A. Graves and J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural
Networks. In Advances in Neural Information Processing Systems, pages 545-552, 2008.
[19] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[20] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for
image generation. arXiv preprint arXiv:1502.04623, 2015.
[21] David Hebert, Thierry Paquet, and Stephane Nicolas. Continuous crf with multi-scale quantization feature
functions application to structure extraction in old newspaper. In Document Analysis and Recognition
(ICDAR), 2011 International Conference on, pages 493?497. IEEE, 2011.
[22] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in
Neural Information Processing Systems, pages 2008-2016, 2015.
[23] Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks for
dense captioning. arXiv preprint arXiv:1511.07571, 2015.
[24] Keechul Jung. Neural network-based text location in color images. Pattern Recognition Letters,
22(14):1503-1515, 2001.
[25] Alfred Kaltenmeier, Torsten Caesar, Joachim M Gloger, and Eberhard Mandler. Sophisticated topology
of hidden Markov models for cursive script recognition. In Document Analysis and Recognition, 1993.,
Proceedings of the Second International Conference on, pages 139-142. IEEE, 1993.
[26] Michal Kozielski, Patrick Doetsch, Hermann Ney, et al. Improvements in RWTH's System for Off-Line
Handwriting Recognition. In Document Analysis and Recognition (ICDAR), 2013 12th International
Conference on, pages 935-939. IEEE, 2013.
[27] Chen-Yu Lee and Simon Osindero. Recursive recurrent nets with attention modeling for ocr in the wild.
arXiv preprint arXiv:1603.03101, 2016.
[28] Laurence Likforman-Sulem, Abderrazak Zahour, and Bruno Taconet. Text line segmentation of historical
documents: a survey. International Journal of Document Analysis and Recognition (IJDAR), 9(2-4):123-138, 2007.
[29] U-V Marti and Horst Bunke. The IAM-database: an English sentence database for offline handwriting
recognition. International Journal on Document Analysis and Recognition, 5(1):39-46, 2002.
[30] R. Messina and C. Kermorvant. Surgenerative Finite State Transducer n-gram for Out-Of-Vocabulary Word
Recognition. In 11th IAPR Workshop on Document Analysis Systems (DAS2014), pages 212-216, 2014.
[31] Bastien Moysset, Pierre Adam, Christian Wolf, and Jérôme Louradour. Space displacement localization
neural networks to locate origin points of handwritten text lines in historical documents. In International
Workshop on Historical Document Imaging and Processing (HIP), 2015.
[32] Bastien Moysset, Christopher Kermorvant, Christian Wolf, and Jérôme Louradour. Paragraph text segmentation into lines with recurrent neural networks. In International Conference of Document Analysis and
Recognition (ICDAR), 2015.
[33] Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. Dropout improves recurrent
neural networks for handwriting recognition. In 14th International Conference on Frontiers in Handwriting
Recognition (ICFHR2014), pages 285-290, 2014.
[34] Joan Andreu Sánchez, Verónica Romero, Alejandro Toselli, and Enrique Vidal. ICFHR 2014 HTRtS:
Handwritten Text Recognition on tranScriptorium Datasets. In International Conference on Frontiers in
Handwriting Recognition (ICFHR), 2014.
[35] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint
arXiv:1312.6229, 2013.
[36] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of
its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
[37] A. Tong, M. Przybocki, V. Maergner, and H. El Abed. NIST 2013 Open Handwriting Recognition and
Translation (OpenHaRT13) Evaluation. In 11th IAPR Workshop on Document Analysis Systems (DAS2014),
2014.
[38] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua
Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint
arXiv:1502.03044, 2015.
5,811 | 6,258 | Incremental Boosting Convolutional Neural Network
for Facial Action Unit Recognition
Shizhong Han, Zibo Meng, Ahmed Shehab Khan, Yan Tong
Department of Computer Science & Engineering, University of South Carolina, Columbia, SC
{han38, mengz, akhan}@email.sc.edu, [email protected]
Abstract
Recognizing facial action units (AUs) from spontaneous facial expressions is still
a challenging problem. Most recently, CNNs have shown promise on facial AU
recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We propose a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into
the CNN via an incremental boosting layer that selects discriminative neurons
from the lower layer and is incrementally updated on successive mini-batches. In
addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune
the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN
and the boosting CNN without incremental learning, as well as outperforming the
state-of-the-art CNN-based methods in AU recognition. The improvement is more
impressive for the AUs that have the lowest frequencies in the databases.
1 Introduction
Facial behavior is a powerful means to express emotions and to perceive the intentions of a human. Developed by Ekman and Friesen [1], the Facial Action Coding System (FACS) describes
facial behavior as combinations of facial action units (AUs), each of which is anatomically related
to the contraction of a set of facial muscles. In addition to applications in human behavior analysis, an automatic AU recognition system has great potential to advance emerging applications in
human-computer interaction (HCI), such as online/remote education, interactive games, and intelligent transportation, as well as to push the frontier of research in psychology.
Recognizing facial AUs from spontaneous facial expressions is challenging because of subtle facial
appearance changes, free head movements, and occlusions, as well as limited AU-coded training
images. As elaborated in the survey papers [2, 3], a number of approaches have been developed
to extract features from videos or static images to characterize facial appearance or geometrical
changes caused by target AUs. Most of them employed hand-crafted features, which, however, are
not designed and optimized for facial AU recognition. Most recently, CNNs have achieved incredible
success in different applications such as object detection and categorization, video analysis, and have
shown promise on facial expression and AU recognition [4, 5, 6, 7, 8, 9, 10].
CNNs contain a large number of parameters, especially as the network becomes deeper. To achieve
satisfactory performance, a large number of training images are required and a mini-batch strategy
is used to deal with large training data, where a small batch of images are employed in each iteration.
In contrast to the millions of training images employed in object categorization and detection, AUcoded training images are limited and usually collected from a small population, e.g., 48,000 images
from 15 subjects in the FERA2015 SEMAINE database [11], and 130,814 images from 27 subjects
in Denver Intensity of Spontaneous Facial Action (DISFA) database [12]. As a result, the learned
CNNs are often overfitted and do not generalize well to unseen subjects.
Boosting, e.g., AdaBoost, is a popular ensemble learning technique, which combines many "weak"
classifiers and has been demonstrated to yield better generalization performance in AU recognition [13]. Boosting can be integrated into the CNN such that discriminative neurons are selected and
activated in each iteration of CNN learning. However, the boosting CNN (B-CNN) can overfit due
Figure 1: An overview of Incremental Boosting CNN. An incremental boosted classifier is trained iteratively.
Outputs of the FC layer are employed as input features and a subset of features (the blue nodes) are selected
by boosting. The selected features in the current iteration are combined with those selected previously (the red
nodes) to form an incremental strong classifier. A loss is calculated based on the incremental classifier and
propagated backward to fine-tune the CNN parameters. The gray nodes are inactive and thus not selected by
the incremental strong classifier. Given a testing image, features are calculated via the CNN and fed to the
boosted classifier to predict the AU label. Best viewed in color.
to the limited training data in each mini-batch. Furthermore, the information captured in the previous
iteration/batch cannot be propagated, i.e., a new set of weak classifiers is selected in every iteration
and the weak classifiers learned previously are discarded.
Inspired by incremental learning, we proposed a novel Incremental Boosting CNN (IB-CNN), which
aims to accumulate information in B-CNN learning when new training samples appear. As shown
in Figure 1, a batch of images is employed in each iteration of CNN learning. The outputs of the
fully-connected (FC) layer are employed as features; a subset of features (the blue nodes), which
is discriminative for recognizing the target AU in the current batch, is selected by boosting. Then,
these selected features are combined with the ones selected previously (the red nodes) to form an
incremental strong classifier. The weights of active features, i.e., both the blue and the red nodes,
are updated such that the features selected most of the time have higher weights. Finally, a loss,
i.e., the overall classification error from both weak classifiers and the incremental strong classifier,
is calculated and backpropagated to fine-tune the CNN iteratively. The proposed IB-CNN has a
complex decision boundary due to boosting and is capable of alleviating the overfitting problem for
the mini-batches by taking advantage of incremental learning.
In summary, this paper has three major contributions. (1) Feature selection and classification are
integrated with CNN optimization in a boosting CNN framework. (2) A novel incremental boosted
classifier is updated iteratively by accumulating information from multiple batches. (3) A novel loss
function, which considers the overall classification error of the incremental strong classifier and
individual classification errors of weak learners, is developed to fine-tune the IB-CNN.
Experimental results on four benchmark AU-coded databases, i.e., Cohn-Kanade (CK) [25] database,
FERA2015 SEMAINE database [11], FERA2015 BP4D database [11], and Denver Intensity of
Spontaneous Facial Action (DISFA) database [12] have demonstrated that the proposed IB-CNN
significantly outperforms the traditional CNN model as well as the state-of-the-art CNN-based methods for AU recognition. Furthermore, the performance improvement of the infrequent AUs is more
impressive, which demonstrates that the proposed IB-CNN is capable of improving CNN learning
with limited training data. In addition, the performance of IB-CNN is not sensitive to the number of
neurons in the FC layer and the learning rate, which are favored traits in CNN learning.
2 Related Work
As detailed in the survey papers [2, 3], various human-designed features are adopted in recognizing facial expressions and AUs including Gabor Wavelets [13], Local Binary Patterns (LBP) [14],
Histogram of Oriented Gradients (HOG) [15], Scale Invariant Feature Transform (SIFT) features [16], Histograms of Local Phase Quantization (LPQ) [17], and their spatiotemporal extensions [17, 18, 19]. Recently, feature learning approaches including sparse coding [20] and deep
learning [4, 5, 6, 7, 8, 9, 10, 21] have been devoted to recognizing facial expressions and AUs.
Among the feature learning based methods, CNNs [4, 5, 6, 7, 8, 9, 10] have attracted increasing
attention. Gudi et al. [9] used a pre-processing method with local and global contrast normalization
2
to improve the inputs of CNNs. Fasel [4] employed multi-size convolutional filters to learn multi-scale features. Liu et al. [7]
jointly fine-tuned temporal appearance and geometry features. Jaiswal and Valstar [10] integrated
bidirectional long short-term memory neural networks with the CNN to extract temporal features.
Most of CNN-based methods make decisions using inner product of the FC layer. A few approaches
developed new objective functions to improve recognition performance. Tang [22, 6] replaced the
softmax loss function with an SVM for optimization. Hinton et al. [23] utilized a dropout technique
to reduce overfitting by dropping out some neuron activations from the previous layer, which can
be seen as an ensemble of networks sharing the same weights. However, the dropout process is
random regardless of the discriminative power of individual neurons. In contrast, the proposed IB-CNN
effectively selects the more discriminative neurons and drops out noisy or redundant neurons.
Medera and Babinec [24] adopted incremental learning using multiple CNNs trained individually
from different subsets, with additional CNNs trained as new samples arrive. Then, the prediction
is calculated by weighted majority-voting of the outputs of all CNNs. However, each CNN may
not have sufficient training data, which is especially true with limited AU-coded data. Different
from [24], the IB-CNN has only one CNN trained along with an incremental strong classifier, where
weak learners are updated over time by accumulating information from multiple batches. Liu et
al. [21] proposed a boosted deep belief network for facial expression recognition, where each weak
classifier is learned exclusively from an image patch. In contrast, weak classifiers are selected from
an FC layer in the proposed IB-CNN and thus, learned from the whole face.
3 Methodology
As illustrated in Figure 1, an IB-CNN model is proposed to integrate boosting with the CNN at the
decision layer with an incremental boosting algorithm, which selects and updates weak learners over
time as well as constructs an incremental strong classifier in an online learning manner. There are
three major steps for incremental boosting: selecting and activating neurons (blue nodes) from the
FC layer by boosting, combining the activated neurons from different batches (blue and red nodes)
to form an incremental strong classifier, and fine-tuning the IB-CNN by minimizing the proposed
loss function. In the following, we start with a brief review of CNNs and then, describe the three
steps of incremental boosting in detail.
3.1 A Brief Review of CNNs
A CNN consists of a stack of layers such as convolutional layers, pooling layers, rectification layers,
FC layers, and a decision layer and transforms the input data into a highly nonlinear representation.
Ideally, learned filters should activate the image patches related to the recognition task, i.e., detecting
AUs in this work. Neurons in an FC layer have full connections with all activations in the previous
layer. Finally, high-level reasoning is done at the decision layer, where the number of outputs is
equal to the number of target classes. The score function used by the decision layer is generally
the inner product of the activations in the FC layer and the corresponding weights. During CNN
training, a loss layer is employed after the decision layer to specify how to penalize the deviations
between the predicted and true labels, where different types of loss functions have been employed,
such as softmax, SVM, and sigmoid cross entropy. In this paper, we substitute the inner-product
score function with a boosting score function to achieve a complex decision boundary.
3.2 Boosting CNN
In a CNN, a mini-batch strategy is often used to handle large training data. Let $X = [x_1, ..., x_M]$ be the activation features of a batch with M training images, where the dimension of the activation feature vector $x_i$ is K, and $y = [y_1, ..., y_M]$, $y_i \in \{-1, 1\}$, is a vector storing the ground truth labels. With the boosting algorithm, the prediction is calculated by a strong classifier $H(\cdot)$ that is the weighted summation of weak classifiers $h(\cdot)$ as follows:
$$H(x_i) = \sum_{j=1}^{K} \alpha_j\, h(x_{ij}, \theta_j); \qquad h(x_{ij}, \theta_j) = \frac{f(x_{ij}, \theta_j)}{\sqrt{f(x_{ij}, \theta_j)^2 + \eta^2}} \qquad (1)$$

where $x_{ij} \in x_i$ is the j-th activation feature of the i-th image. Each feature corresponds to a candidate weak classifier $h(x_{ij}, \theta_j)$ with output in the range (−1, 1). The term $\frac{f(\cdot)}{\sqrt{f(\cdot)^2 + \eta^2}}$ is used to simulate a sign(·) function while allowing the derivative to be computed for gradient descent optimization. In this work, $f(x_{ij}, \theta_j) \in \mathbb{R}$ is defined as a one-level decision tree (a decision stump) with the threshold $\theta_j$, which has been widely used in AdaBoost. The parameter $\eta$ in Eq. 1 is employed to control the slope of the function $\frac{f(\cdot)}{\sqrt{f(\cdot)^2 + \eta^2}}$ and can be set according to the distribution of $f(\cdot)$ as $\eta = \sigma/c$, where $\sigma$ is the standard deviation of $f(\cdot)$ and c is a constant; in this work, $\eta$ is empirically set to $\sigma/2$. $\alpha_j \ge 0$ is the weight of the j-th weak classifier, with $\sum_{j=1}^{K} \alpha_j = 1$. When $\alpha_j = 0$, the corresponding neuron is inactive and will not go through the feedforward and backpropagation process.
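To make Eq. 1 concrete, the NumPy sketch below evaluates the soft decision stumps and the resulting strong classifier on a toy batch; the specific stump response $f(x_{ij}, \theta_j) = x_{ij} - \theta_j$ and all data are assumptions for illustration only, not the paper's exact implementation.

```python
import numpy as np

def weak_classifier(x, theta, eta):
    """Soft decision stump of Eq. 1: f / sqrt(f^2 + eta^2), with output in (-1, 1).
    f(x, theta) = x - theta is an assumed one-level decision-stump response."""
    f = x - theta
    return f / np.sqrt(f ** 2 + eta ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))              # M = 4 images, K = 3 activation features each
theta = np.zeros(3)                      # one threshold per candidate weak classifier
eta = X.std() / 2.0                      # eta = sigma / c with c = 2, as set empirically
h = weak_classifier(X, theta, eta)       # shape (4, 3), entries in (-1, 1)

alpha = np.array([0.5, 0.5, 0.0])        # boosting weights summing to 1; alpha_j = 0 => inactive neuron
H = h @ alpha                            # strong classifier H(x_i) of Eq. 1, shape (4,)
```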
Traditional boosting algorithms only consider the loss of the strong classifier, which can be dominated by some weak classifiers with large weights, potentially leading to overfitting. To account for
classification errors from both the strong classifier and the individual classifiers, the loss function is
defined as the summation of a strong-classifier loss and a weak-classifier loss as follows:
$$\mathcal{L}^B = \lambda\, \mathcal{L}^B_{strong} + (1 - \lambda)\, \mathcal{L}_{weak} \qquad (2)$$

where $\lambda \in [0, 1]$ balances the strong-classifier loss and the weak-classifier loss.
The strong-classifier loss is defined as the Euclidean distance between the prediction and the ground-truth label:

$$\mathcal{L}^B_{strong} = \frac{1}{M}\sum_{i=1}^{M}\left(H(x_i) - y_i\right)^2 \qquad (3)$$
The weak-classifier loss is defined as the summation of the individual losses of all weak classifiers:

$$\mathcal{L}_{weak} = \frac{1}{MN}\sum_{i=1}^{M}\ \sum_{\substack{1 \le j \le K \\ \alpha_j > 0}}\left(h(x_{ij}, \theta_j) - y_i\right)^2 \qquad (4)$$

where the constraint $\alpha_j > 0$ excludes inactive neurons when calculating the loss, and N denotes the number of active weak classifiers.
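As a sanity check on Eqs. 2-4, the sketch below computes the combined loss on toy values, taking N to be the number of active weak classifiers as discussed above; all inputs are invented for illustration.

```python
import numpy as np

def combined_loss(h, alpha, y, lam=0.5):
    """Eq. 2: lam * L_strong + (1 - lam) * L_weak for one mini-batch."""
    M = len(y)
    l_strong = np.mean((h @ alpha - y) ** 2)          # Eq. 3: strong-classifier loss
    active = alpha > 0                                # alpha_j > 0 excludes inactive neurons
    N = active.sum()                                  # number of active weak classifiers
    l_weak = ((h[:, active] - y[:, None]) ** 2).sum() / (M * N)  # Eq. 4
    return lam * l_strong + (1 - lam) * l_weak

rng = np.random.default_rng(0)
h = rng.uniform(-1, 1, size=(4, 3))                   # weak-classifier outputs, M = 4, K = 3
alpha = np.array([0.5, 0.5, 0.0])                     # third neuron inactive
y = np.array([1.0, -1.0, 1.0, -1.0])
print(combined_loss(h, alpha, y))
```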
Driven by the loss $\mathcal{L}^B$ defined in Eq. 2, the B-CNN can be iteratively fine-tuned by backpropagation, as illustrated in the top of Figure 2. However, the information captured previously, e.g., the weights and thresholds of the active neurons, is discarded for a new batch. Due to limited data in each mini-batch, the trained B-CNN can be overfitted.
3.3 Incremental Boosting
Figure 2: A comparison of the IB-CNN and the B-CNN structures. For clarity, the illustration of IB-CNN
or B-CNN starts from the FC layer (the cyan nodes). The blue nodes are active nodes selected in the current
iteration; the red nodes are the active nodes selected from previous iterations; and the gray nodes are inactive.
Incremental learning can help to improve the prediction performance and to reduce overfitting. As
illustrated in the bottom of Figure 2, both of the blue nodes selected in the current iteration and the
red nodes selected previously are incrementally combined to form an incremental strong classifier
$H_I^t$ at the t-th iteration:

$$H_I^t(x_i^t) = \frac{(t-1)\, H_I^{t-1}(x_i^{t-1}) + H^t(x_i^t)}{t} \qquad (5)$$

where $H_I^{t-1}(\cdot)$ is the incremental strong classifier obtained at the (t−1)-th iteration, and $H^t(\cdot)$ is the boosted strong classifier estimated in the current iteration.

Substituting Eq. 1 into Eq. 5, we have

$$H_I^t(x_i^t) = \sum_{j=1}^{K} \beta_j^t\, h^t(x_{ij}^t; \theta_j^t); \qquad \beta_j^t = \frac{(t-1)\,\beta_j^{t-1} + \hat{\alpha}_j^t}{t} \qquad (6)$$

where $\hat{\alpha}_j^t$ is the weak classifier weight calculated in the t-th iteration by boosting and $\beta_j^t$ is the cumulative weight considering previous iterations.
Algorithm 1 Incremental Boosting Algorithm for the IB-CNN
Input: The number of iterations (mini-batches) T and activation features X with the size of M × K, where M is the number of images in a mini-batch and K is the dimension of the activation feature vector for one image.
 1: for each input activation j from 1 to K do
 2:     $\beta_j^1 = 0$
 3: end for
 4: for each mini-batch t from 1 to T do
 5:     Feed-forward to the fully connected layer;
 6:     Select active features by boosting and calculate weights $\hat{\alpha}^t$ based on the standard AdaBoost;
 7:     Update the incremental strong classifier as in Eq. 6;
 8:     Calculate the overall loss of the IB-CNN as in Eq. 8;
 9:     Backpropagate the loss based on Eq. 9 and Eq. 10;
10:     Continue backpropagation to lower layers.
11: end for
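The cumulative update at the heart of Algorithm 1 (Eqs. 5-6) can be sketched in a few lines; `boost_select` below is a hypothetical stand-in for the standard AdaBoost selection step, which the algorithm invokes but does not spell out.

```python
import numpy as np
rng = np.random.default_rng(0)

def boost_select(h, y):
    """Hypothetical selection step: score features by sign accuracy, keep the better half."""
    acc = (np.sign(h) == y[:, None]).mean(axis=0)
    alpha = np.where(acc >= np.median(acc), acc, 0.0)
    return alpha / max(alpha.sum(), 1e-12)            # fresh weights, summing to 1

K = 3
beta = np.zeros(K)                                    # cumulative weights; all neurons start inactive
for t in range(1, 6):                                 # mini-batches t = 1..T
    h = rng.uniform(-1, 1, size=(8, K))               # weak-classifier outputs for this batch
    y = rng.choice([-1.0, 1.0], size=8)
    alpha_hat = boost_select(h, y)                    # weights from boosting on this batch
    beta = ((t - 1) * beta + alpha_hat) / t           # Eq. 6: incremental cumulative update
    H_I = h @ beta                                    # incremental strong classifier on this batch
# beta still sums to 1 by construction, matching the normalization described in the text.
```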
As shown in Figure 3, $h^{t-1}(\cdot)$ has been updated to $h^t(\cdot)$ by updating the threshold $\theta_j^{t-1}$ to $\theta_j^t$. If the j-th weak classifier was not selected before, $\theta_j^t$ is estimated in the t-th iteration by boosting. Otherwise, $\theta_j^t$ will be updated from the previous iteration after backpropagation as follows:

$$\theta_j^t = \theta_j^{t-1} - \epsilon\, \frac{\partial \mathcal{L}^{H_I^{t-1}}}{\partial \theta_j^{t-1}} \qquad (7)$$

where $\epsilon$ is the learning rate.
Then, the incremental strong classifier $H_I^t$ is updated over time. As illustrated in Figure 3, if a neuron is activated in the current iteration, the corresponding weight will increase; otherwise, it will decrease. The summation of the weights of all weak classifiers is normalized to 1. Hence, the weak classifiers selected most of the time, i.e., effective for most mini-batches, will have higher weights. Therefore, the overall loss of the IB-CNN is calculated as

$$\mathcal{L}^{IB} = \lambda\, \mathcal{L}^{IB}_{strong} + (1 - \lambda)\, \mathcal{L}_{weak} \qquad (8)$$

where $\mathcal{L}^{IB}_{strong} = \frac{1}{M}\sum_{i=1}^{M}\left(H_I^t(x_i^t) - y_i^t\right)^2$.

Figure 3: An illustration of constructing the incremental strong classifier. Squares represent neuron activations. The gray nodes are inactive, while the blue and red nodes are active nodes selected in the current iteration and previous iterations, respectively.
Compared to the B-CNN, the IB-CNN exploits the information from all mini-batches. For testing, IB-CNN uses the incremental strong classifier, while the B-CNN employs the strong classifier
learned from the last iteration.
3.4 IB-CNN Fine-tuning
A stochastic gradient descent method is utilized for fine-tuning the IB-CNN, i.e., updating the IB-CNN parameters, by minimizing the loss in Eq. 8. The descent directions for $x_{ij}^t$ and $\theta_j^t$ can be calculated as follows:

$$\frac{\partial \mathcal{L}^{IB}}{\partial x_{ij}^t} = \lambda\, \frac{\partial \mathcal{L}^{IB}_{strong}}{\partial H_I^t(x_i^t)}\, \frac{\partial H_I^t(x_i^t)}{\partial x_{ij}^t} + (1 - \lambda)\, \frac{\partial \mathcal{L}_{weak}}{\partial h^t(x_{ij}^t; \theta_j^t)}\, \frac{\partial h^t(x_{ij}^t; \theta_j^t)}{\partial x_{ij}^t} \qquad (9)$$

$$\frac{\partial \mathcal{L}^{IB}}{\partial \theta_j^t} = \sum_{i=1}^{M} \lambda\, \frac{\partial \mathcal{L}^{IB}_{strong}}{\partial H_I^t(x_i^t)}\, \frac{\partial H_I^t(x_i^t)}{\partial \theta_j^t} + \sum_{i=1}^{M} (1 - \lambda)\, \frac{\partial \mathcal{L}_{weak}}{\partial h^t(x_{ij}^t; \theta_j^t)}\, \frac{\partial h^t(x_{ij}^t; \theta_j^t)}{\partial \theta_j^t} \qquad (10)$$

where $\frac{\partial \mathcal{L}^{IB}}{\partial x_{ij}^t}$ and $\frac{\partial \mathcal{L}^{IB}}{\partial \theta_j^t}$ are only calculated for the active nodes of incremental boosting (the red and blue nodes in Figure 3). $\frac{\partial \mathcal{L}^{IB}}{\partial x_{ij}^t}$ can be further backpropagated to the lower FC layers and convolutional layers. The incremental boosting algorithm for the IB-CNN is summarized in Algorithm 1.
4 Experiments
To evaluate the effectiveness of the proposed IB-CNN model, extensive experiments have been conducted on four benchmark AU-coded databases. The CK database [25] contains 486 image sequences from 97 subjects and has been widely used for evaluating the performance of AU recognition. In addition, 14 AUs were annotated frame-by-frame [30] for training and evaluation. The
FERA2015 SEMAINE database [11] contains 6 AUs and 31 subjects with 93,000 images. The
FERA2015 BP4D database [11] has 11 AUs and 41 subjects with 146,847 images. The DISFA
database [12] has 12 labeled AUs and 27 subjects with 130,814 images.
4.1 Pre-Processing
Face alignment is conducted to reduce variation in face scale and in-plane rotation across different
facial images. Specifically, the face regions are aligned based on three fiducial points: the centers of
the two eyes and the mouth, and scaled to a size of 128 ? 96. In order to alleviate face pose variations, especially out-of-plane rotations, face images are further warped to a frontal view based on
landmarks that are less affected by facial expressions including landmarks along the facial contour,
two eye centers, the nose tip, the mouth center, and on the forehead. A total of 23 landmarks that
are less affected by facial muscle movements are selected as control points to warp the face region
to the mean facial shape calculated from all images¹.
Time sequence normalization is used to reduce identity-related information and highlight appearance
and geometrical changes due to activation of AUs. Particularly, each image is normalized based on
the mean and the standard deviation calculated from a short video sequence containing at least 800
continuous frames at a frame rate of 30 fps².
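The time-sequence normalization step is simple enough to state directly; the sketch below assumes the frames are already aligned to 128 × 96 and uses the 800-frame window from the text, with synthetic data.

```python
import numpy as np

def temporal_normalize(frames):
    """Normalize each frame by the per-pixel mean/std of its surrounding sequence
    (at least 800 continuous frames at 30 fps), reducing identity-related appearance."""
    mu = frames.mean(axis=0)
    sigma = frames.std(axis=0) + 1e-8      # epsilon guards against constant pixels
    return (frames - mu) / sigma

frames = np.random.rand(800, 128, 96)      # aligned face crops over an 800-frame window
normalized = temporal_normalize(frames)
```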
4.2 CNN Implementation Details
The proposed IB-CNN is implemented based on a modification of cifar10_quick in Caffe [28]. As illustrated in Figure 1, the preprocessed facial images are fed into the network as input. The IB-CNN consists of three stacked convolutional layers with activation functions, two average pooling layers, an FC layer, and the proposed IB layer to predict the AU label. Specifically, the first two convolutional layers have 32 filters with a size of 5 × 5 and a stride of 1. Then, the output feature maps are sent to a rectified layer followed by the average pooling layer with a downsampling stride of 3. The last convolutional layer has 64 filters with a size of 5 × 5, and the output 9 × 5 feature maps are fed into an FC layer with 128 nodes. The outputs of the FC layer are sent to the proposed IB layer. Stochastic gradient descent, with a momentum of 0.9 and a mini-batch size of 100, is used for training the CNN for each target AU.
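For readers more familiar with PyTorch than Caffe, a rough restatement of this stack is sketched below; padding, pooling kernels, and the reduction of the IB layer to its strong-classifier forward pass are assumptions, so the intermediate feature-map sizes need not match the 9 × 5 maps above exactly.

```python
import torch
import torch.nn as nn

class IBCNNSketch(nn.Module):
    """Approximate IB-CNN trunk: three conv layers, two average-pooling layers,
    an FC layer of 128 nodes, and a soft decision-stump IB layer (Eq. 1)."""
    def __init__(self, eta=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1), nn.ReLU(),
            nn.AvgPool2d(kernel_size=3, stride=3),
            nn.Conv2d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
            nn.AvgPool2d(kernel_size=3, stride=3),
            nn.Conv2d(32, 64, kernel_size=5, stride=1),
        )
        self.fc = nn.LazyLinear(128)                               # FC layer feeding the IB layer
        self.alpha = nn.Parameter(torch.full((128,), 1.0 / 128))   # boosting weights
        self.theta = nn.Parameter(torch.zeros(128))                # stump thresholds
        self.eta = eta

    def forward(self, x):
        a = self.fc(self.features(x).flatten(1))                   # activations x_ij of the FC layer
        f = a - self.theta                                         # decision-stump responses
        h = f / torch.sqrt(f ** 2 + self.eta ** 2)                 # weak classifiers, Eq. 1
        return h @ (self.alpha / self.alpha.sum())                 # strong-classifier score

model = IBCNNSketch()
score = model(torch.randn(2, 1, 128, 96))                          # two 128 x 96 face crops
```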
4.3 Experimental Results
To demonstrate the effectiveness of the proposed IB-CNN, two baseline methods are employed for comparison. The first method, denoted as CNN, is a traditional CNN model with a sigmoid cross entropy
decision layer. The second method, denoted as B-CNN, is the boosting CNN described in Section 3.2.
Both CNN and B-CNN have the same architecture as the IB-CNN with different decision layers.
Performance evaluation on the SEMAINE database: All the models compared were trained
on the training set and evaluated on the validation set. The training-testing process was repeated
5 times. The mean and standard deviation of F1 score and two-alternative forced choice (2AFC)
score are calculated from the 5 runs for each target AU. As shown in Table 1, the proposed
IB-CNN outperforms the traditional CNN in terms of the average F1 score (0.416 vs. 0.347) and the
average 2AFC score (0.775 vs. 0.735). Not surprisingly, IB-CNN also clearly beats B-CNN: the
average F1 score increases from 0.310 (B-CNN) to 0.416 (IB-CNN) and the average 2AFC score
increases from 0.673 (B-CNN) to 0.775 (IB-CNN), thanks to incremental learning over time. In
addition, IB-CNN considering both strong and weak classifier losses outperforms the one with only
strong-classifier loss, denoted as IB-CNN-S. Note that IB-CNN achieves a significant improvement
for recognizing AU28 (Lips suck), which has the least number of occurrences (around 1.25%
positive samples) in the training set, from 0.280 (CNN) and 0.144 (B-CNN) to 0.490 (IB-CNN) in
terms of F1 score. The performance of B-CNN is the worst for infrequent AUs due to the limited
positive samples in each mini-batch. In contrast, the proposed IB-CNN improves CNN learning
significantly with limited training data.
Table 1: Performance comparison of CNN, B-CNN, IB-CNN-S, and IB-CNN on the SEMAINE database in terms of F1 and 2AFC (format: mean±std, reported as F1 / 2AFC). PPos: percentage of positive samples in the training set.

AUs    PPos     CNN                          B-CNN                        IB-CNN-S                     IB-CNN
AU2    13.5%    0.314±0.065 / 0.715±0.076    0.241±0.073 / 0.646±0.060    0.414±0.016 / 0.812±0.010    0.410±0.024 / 0.820±0.009
AU12   17.6%    0.508±0.023 / 0.751±0.009    0.555±0.007 / 0.746±0.013    0.549±0.016 / 0.773±0.007    0.539±0.013 / 0.777±0.005
AU17   1.9%     0.288±0.020 / 0.767±0.014    0.204±0.048 / 0.719±0.036    0.248±0.048 / 0.767±0.011    0.248±0.007 / 0.777±0.012
AU25   17.7%    0.358±0.033 / 0.635±0.011    0.407±0.006 / 0.618±0.011    0.378±0.009 / 0.638±0.011    0.401±0.014 / 0.638±0.003
AU28   1.25%    0.280±0.111 / 0.840±0.076    0.144±0.092 / 0.639±0.195    0.483±0.069 / 0.898±0.006    0.490±0.078 / 0.904±0.011
AU45   19.7%    0.333±0.036 / 0.702±0.022    0.311±0.016 / 0.668±0.019    0.401±0.009 / 0.738±0.010    0.398±0.005 / 0.734±0.005
AVG    -        0.347±0.026 / 0.735±0.014    0.310±0.015 / 0.673±0.028    0.412±0.018 / 0.771±0.003    0.416±0.018 / 0.775±0.004
¹ For the CK, SEMAINE, and DISFA databases, 66 landmarks are detected [26] for face alignment and warping. For the BP4D database, the 49 landmarks provided in the database are used for face alignment.
² Psychological studies show that each AU activation ranges from 48 to 800 frames at 30 fps [27].
Table 2: Performance comparison of CNN, B-CNN, and IB-CNN on the DISFA database in terms of F1 score and 2AFC score (format: mean±std, reported as F1 / 2AFC). PPos: percentage of positive samples in the whole database.

AUs    PPos     CNN                          B-CNN                        IB-CNN
AU1    6.71%    0.257±0.200 / 0.724±0.116    0.259±0.150 / 0.780±0.079    0.327±0.204 / 0.773±0.119
AU2    5.63%    0.346±0.226 / 0.769±0.119    0.333±0.197 / 0.835±0.085    0.394±0.219 / 0.849±0.073
AU4    18.8%    0.515±0.208 / 0.820±0.116    0.446±0.186 / 0.793±0.083    0.586±0.104 / 0.886±0.060
AU5    2.09%    0.195±0.129 / 0.780±0.154    0.184±0.114 / 0.749±0.279    0.312±0.153 / 0.887±0.076
AU6    14.9%    0.619±0.072 / 0.896±0.042    0.596±0.086 / 0.906±0.040    0.624±0.069 / 0.917±0.026
AU9    5.45%    0.340±0.131 / 0.859±0.081    0.331±0.115 / 0.895±0.057    0.385±0.137 / 0.900±0.057
AU12   23.5%    0.718±0.063 / 0.943±0.028    0.686±0.083 / 0.913±0.030    0.778±0.047 / 0.953±0.020
AU15   6.01%    0.174±0.132 / 0.586±0.174    0.224±0.120 / 0.753±0.091    0.135±0.122 / 0.511±0.226
AU17   9.88%    0.281±0.154 / 0.678±0.125    0.330±0.132 / 0.763±0.086    0.376±0.222 / 0.742±0.148
AU20   3.46%    0.134±0.113 / 0.604±0.155    0.184±0.101 / 0.757±0.083    0.126±0.069 / 0.628±0.151
AU25   35.2%    0.716±0.111 / 0.890±0.064    0.670±0.064 / 0.844±0.049    0.822±0.076 / 0.922±0.063
AU26   19.1%    0.563±0.152 / 0.810±0.073    0.507±0.131 / 0.797±0.054    0.578±0.155 / 0.876±0.039
AVG    -        0.405±0.055 / 0.780±0.036    0.398±0.059 / 0.815±0.031    0.457±0.067 / 0.823±0.031
Table 3: Performance comparison with the state-of-the-art methods on four benchmark databases in terms of common metrics. ACC: Average classification rate.

CK (ACC):             AAM [29]: 0.955;  Gabor+DBN [30]: 0.933;  LBP [32]: 0.949;  CNN (baseline): 0.937;  IB-CNN: 0.951
SEMAINE (F1):         LGBP [11]: 0.351;  CNN [9]: 0.341;  DLA-SIFT [16]: 0.435;  CNN (baseline): 0.347;  IB-CNN: 0.416
BP4D (F1):            LGBP [11]: 0.580;  CNN [9]: 0.522;  DLA-SIFT [16]: 0.591;  CNN (baseline): 0.510;  IB-CNN: 0.578
DISFA (2AFC / ACC):   Gabor [12]: N/A / 0.857;  BGCS [31]: N/A / 0.868;  LPQ [17]: 0.810 / N/A;  ML-CNN [33]: 0.757 / 0.846;  CNN (baseline): 0.780 / 0.839;  IB-CNN: 0.825 / 0.858
Performance evaluation on the DISFA database: A 9-fold cross-validation strategy is employed
for the DISFA database, where 8 subsets of 24 subjects were utilized for training and the remaining
one subset of 3 subjects for testing. For each fold, the training-testing process was repeated 5 times.
The mean and standard deviation of the F1 score and the 2AFC score are calculated from the 5 × 9
runs for each target AU and reported in Table 2. As shown in Table 2, the proposed IB-CNN improves
the performance from 0.405 (CNN) and 0.398 (B-CNN) to 0.457 (IB-CNN) in terms of the average
F1 score and from 0.780 (CNN) and 0.815 (B-CNN) to 0.823 (IB-CNN) in terms of 2AFC score.
Similar to the results on the SEMAINE database, the performance improvement of the infrequent
AUs is more impressive. AU5 (upper lid raiser) has the least number of occurrences, i.e., 2.09%
positive samples, in the DISFA database. The recognition performance increases from 0.195 (CNN)
and 0.184 (B-CNN) to 0.312 (IB-CNN) in terms of the average F1 score.
Comparison with the State-of-the-Art methods: We further compare the proposed IB-CNN with
the state-of-the-art methods, especially the CNN-based methods, evaluated on the four benchmark
databases using the metrics that are common in those papers³. As shown in Table 3, the performance of IB-CNN is comparable with the state-of-the-art methods and, more importantly, outperforms the CNN-based methods.
4.4 Data Analysis
Data analysis of the parameter $\eta$: The value of $\eta$ can affect the slope of the simulated sign(·) function and, consequently, the gradient and optimization process. When $\eta$ is small, the simulation is more similar to the real sign(·) function, but the derivative is near zero for most of the input data, which can cause slow convergence or divergence. An experiment was conducted to analyze the influence of $\eta = \sigma/c$ in Eq. 1. Specifically, an average F1 score is calculated from all AUs in the SEMAINE database while varying the value of c. As illustrated in Figure 4, the recognition performance in terms of the average F1 score is robust to the choice of $\eta$ when c ranges from 0.5 to 16. In our experiment, $\eta$ is set to half of the standard deviation, $\sigma/2$, empirically.

Figure 4: Recognition performance versus the choice of $\eta$.
Data analysis of the number of input neurons in the IB layer: Selecting an exact number of nodes for the hidden layers remains an open question. An experiment was conducted to demonstrate that the proposed IB-CNN is insensitive to the number of input neurons. Specifically, a set of IB-CNNs, with 64, 128, 256, 512, 1024, and 2048 input neurons, were trained and tested on the SEMAINE database. For each IB-CNN, the average F1 score is computed over 5 runs for each AU. As shown in Figure 5, the B-CNN and, especially, the proposed IB-CNN are more robust to the number of input neurons compared to the traditional CNN, since only a small set of neurons is active, in contrast to the FC layer in the traditional CNN.

³ Since the testing sets of the SEMAINE and BP4D databases are not available, the IB-CNN is compared with the methods reported on the validation sets.
Figure 5: Recognition performance (F1 score) versus the number of input neurons in the IB layer, shown for CNN, B-CNN, and IB-CNN on AU2, AU12, AU17, AU25, AU28, and AU45.
Data analysis of the learning rate $\epsilon$: Another issue in CNNs is the choice of the learning rate $\epsilon$. The performance of the IB-CNN at different learning rates is depicted in Figure 6 in terms of the average F1 score calculated from all AUs on the SEMAINE database. Compared to the traditional CNN, the proposed IB-CNN is less sensitive to the value of the learning rate.

Figure 6: Recognition performance versus the learning rate $\epsilon$ (log scale) for IB-CNN and CNN.

5 Conclusion and Future Work

In this paper, a novel IB-CNN was proposed to integrate boosting classification into a CNN for the application of AU recognition. To deal with limited positive samples in a mini-batch, an incremental boosting algorithm was developed to accumulate information from multiple batches over time. A novel loss function that accounts for errors from both the incremental strong classifier and individual weak classifiers is proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN achieves significant improvement over the traditional CNN, as well as the state-of-the-art CNN-based methods for AU recognition. Furthermore, the IB-CNN is more effective in recognizing infrequent AUs with limited training data. The IB-CNN is a general machine learning method and can be adapted to other learning tasks, especially those with limited training data. In the future, we plan to extend it to multitask learning by replacing the binary classifier with a multiclass boosting classifier.
Acknowledgment
This work is supported by National Science Foundation under CAREER Award IIS-1149787.
References
[1] Ekman, P., Friesen, W.V., Hager, J.C.: Facial Action Coding System: the Manual. Research Nexus, Div., Network Information Research Corp., Salt Lake City, UT (2002)
[2] Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE T-PAMI 31(1) (Jan. 2009) 39–58
[3] Sariyanidi, E., Gunes, H., Cavallaro, A.: Automatic analysis of facial affect: A survey of registration, representation and recognition. IEEE T-PAMI 37(6) (June 2015) 1113–1133
[4] Fasel, B.: Head-pose invariant facial expression recognition using convolutional neural networks. In: ICMI. (2002) 529–534
[5] Rifai, S., Bengio, Y., Courville, A., Vincent, P., Mirza, M.: Disentangling factors of variation for facial expression recognition. In: ECCV. (2012) 808–822
[6] Tang, Y.: Deep learning using linear support vector machines. In: ICML. (2013)
[7] Liu, M., Li, S., Shan, S., Wang, R., Chen, X.: Deeply learning deformable facial action parts model for dynamic expression analysis. In: ACCV. (2014)
[8] Jung, H., Lee, S., Yim, J., Park, S., Kim, J.: Joint fine-tuning in deep neural networks for facial expression recognition. In: ICCV. (2015) 2983–2991
[9] Gudi, A., Tasli, H.E., den Uyl, T.M., Maroulis, A.: Deep learning based FACS action unit occurrence and intensity estimation. In: FG. (2015)
[10] Jaiswal, S., Valstar, M.F.: Deep learning the dynamic appearance and shape of facial action units. In: WACV. (2016)
[11] Valstar, M., Girard, J., Almaev, T., McKeown, G., Mehu, M., Yin, L., Pantic, M., Cohn, J.: FERA 2015 - second facial expression recognition and analysis challenge. FG (2015)
[12] Mavadati, S.M., Mahoor, M.H., Bartlett, K., Trinh, P., Cohn, J.F.: DISFA: A spontaneous facial action intensity database. IEEE Trans. on Affective Computing 4(2) (2013) 151–160
[13] Bartlett, M.S., Littlewort, G., Frank, M.G., Lainscsek, C., Fasel, I., Movellan, J.R.: Recognizing facial expression: Machine learning and application to spontaneous behavior. In: CVPR. (2005) 568–573
[14] Valstar, M.F., Mehu, M., Jiang, B., Pantic, M., Scherer, K.: Meta-analysis of the first facial expression recognition challenge. IEEE T-SMC-B 42(4) (2012) 966–979
[15] Baltrusaitis, T., Mahmoud, M., Robinson, P.: Cross-dataset learning and person-specific normalisation for automatic action unit detection. In: FG. Volume 6. (2015) 1–6
[16] Yuce, A., Gao, H., Thiran, J.: Discriminant multi-label manifold embedding for facial action unit detection. In: FG. (2015)
[17] Jiang, B., Martinez, B., Valstar, M.F., Pantic, M.: Decision level fusion of domain specific regions for facial action recognition. In: ICPR, IEEE (2014) 1776–1781
[18] Zhao, G., Pietikäinen, M.: Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE T-PAMI 29(6) (June 2007) 915–928
[19] Yang, P., Liu, Q., Metaxas, D.N.: Boosting encoded dynamic features for facial expression recognition. Pattern Recognition Letters 30(2) (Jan. 2009) 132–139
[20] Zafeiriou, S., Petrou, M.: Sparse representations for facial expressions recognition via L1 optimization. In: CVPR Workshops. (2010) 32–39
[21] Liu, P., Han, S., Meng, Z., Tong, Y.: Facial expression recognition via a boosted deep belief network. In: CVPR. (2014)
[22] Nagi, J., Di Caro, G.A., Giusti, A., Nagi, F., Gambardella, L.M.: Convolutional neural support vector machines: hybrid visual pattern classifiers for multi-robot systems. In: ICMLA. (2012) 27–32
[23] Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint (2012)
[24] Medera, D., Babinec, S.: Incremental learning of convolutional neural networks. In: IJCCI. (2009) 547–550
[25] Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: FG. (2000) 46–53
[26] Asthana, A., Zafeiriou, S., Cheng, S., Pantic, M.: Robust discriminative response map fitting with constrained local models. In: CVPR. (2013) 3444–3451
[27] Sayette, M.A., Cohn, J.F., Wertz, J.M., Perrott, M.A., Parrott, D.J.: A psychometric evaluation of the facial action coding system for assessing spontaneous expression. J. Nonverbal Behavior 25(3) (2001) 167–185
[28] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: ACM MM, ACM (2014) 675–678
[29] Lucey, S., Ashraf, A.B., Cohn, J.: Investigating spontaneous facial action recognition through AAM representations of the face. In Kurihara, K., ed.: Face Recognition Book. Pro Literatur Verlag, Mammendorf, Germany (April 2007)
[30] Tong, Y., Liao, W., Ji, Q.: Facial action unit recognition by exploiting their dynamic and semantic relationships. IEEE T-PAMI 29(10) (October 2007) 1683–1699
[31] Song, Y., McDuff, D., Vasisht, D., Kapoor, A.: Exploiting sparsity and co-occurrence structure for action unit recognition. In: FG. (2015)
[32] Han, S., Meng, Z., Liu, P., Tong, Y.: Facial grid transformation: A novel face registration approach for improving facial action unit recognition. In: ICIP. (2014)
[33] Ghosh, S., Laksana, E., Scherer, S., Morency, L.: A multi-label convolutional neural network approach to cross-domain action unit detection. ACII (2015)
Aditya Grover, Stefano Ermon
Department of Computer Science
Stanford University
{adityag,ermon}@cs.stanford.edu
Abstract
Variational approaches are often used to approximate intractable posteriors or normalization constants in hierarchical latent variable models. While often effective
in practice, it is known that the approximation error can be arbitrarily large. We
propose a new class of bounds on the marginal log-likelihood of directed latent
variable models. Our approach relies on random projections to simplify the posterior. In contrast to standard variational methods, our bounds are guaranteed to be
tight with high probability. We provide a new approach for learning latent variable
models based on optimizing our new bounds on the log-likelihood. We demonstrate
empirical improvements on benchmark datasets in vision and language for sigmoid
belief networks, where a neural network is used to approximate the posterior.
1 Introduction
Hierarchical models with multiple layers of latent variables are emerging as a powerful class of
generative models of data in a range of domains, ranging from images to text [1, 18]. The great
expressive power of these models, however, comes at a significant computational cost. Inference and
learning are typically very difficult, often involving intractable posteriors or normalization constants.
The key challenge in learning latent variable models is to evaluate the marginal log-likelihood
of the data and optimize it over the parameters. The marginal log-likelihood is generally nonconvex and intractable to compute, as it requires marginalizing over the unobserved variables.
Existing approaches rely on Monte Carlo [12] or variational methods [2] to approximate this integral.
Variational approximations are particularly suitable for directed models, because they directly provide
tractable lower bounds on the marginal log-likelihood.
Variational Bayes approaches use variational lower bounds as a tractable proxy for the true marginal
log-likelihood. While optimizing a lower bound is a reasonable strategy, the true marginal loglikelihood of the data is not necessarily guaranteed to improve. In fact, it is well known that
variational bounds can be arbitrarily loose. Intuitively, difficulties arise when the approximating
family of tractable distributions is too simple and cannot capture the complexity of the (intractable)
posterior, no matter how well the variational parameters are chosen.
In this paper, we propose a new class of marginal log-likelihood approximations for directed latent
variable models with discrete latent units that are guaranteed to be tight, assuming an optimal choice
for the variational parameters. Our approach uses a recently introduced class of random projections
[7, 15] to improve the approximation achieved by a standard variational approximation such as
mean-field. Intuitively, our approach relies on a sequence of random projections to simplify the
posterior, without losing too much information at each step, until it becomes easy to approximate
with a mean-field distribution.
We provide a novel learning framework for directed, discrete latent variable models based on
optimizing this new lower bound. Our approach jointly optimizes the parameters of the generative
model and the variational parameters of the approximating model using stochastic gradient descent
(SGD). We demonstrate an application of this approach to sigmoid belief networks, where neural
networks are used to specify both the generative model and the family of approximating distributions.
We use a new stochastic, sampling based approximation of the variational projected bound, and show
empirically that by employing random projections we are able to significantly improve the marginal
log-likelihood estimates.
Overall, our paper makes the following contributions:
1. We extend [15], deriving new (tight) stochastic bounds for the marginal log-likelihood of
directed, discrete latent variable models.
2. We develop a "black-box" [23] random-projection based algorithm for learning and inference
that is applicable beyond the exponential family and does not require deriving potentially
complex updates or gradients by hand.
3. We demonstrate the superior performance of our algorithm on sigmoid belief networks
with discrete latent variables in which a highly expressive neural network approximates the
posterior and optimization is done using an SGD variant [16].
2 Background setup
Let $p_\theta(X, Z)$ denote the joint probability distribution of a directed latent variable model parameterized by $\theta$. Here, $X = \{X_i\}_{i=1}^m$ represents the observed random variables, which are explained through a set of latent variables $Z = \{Z_i\}_{i=1}^n$. In general, X and Z can be discrete or continuous. Our learning framework assumes discrete latent variables Z, whereas X can be discrete or continuous.

Learning latent variable models based on the maximum likelihood principle involves an intractable marginalization over the latent variables. There are two complementary approaches to learning latent variable models based on approximate inference, which we discuss next.
Learning latent variable models based on the maximum likelihood principle involves an intractable
marginalization over the latent variables. There are two complementary approaches to learning latent
variable models based on approximate inference which we discuss next.
2.1 Learning based on amortized variational inference
In variational inference, given a data point x, we introduce a distribution $q_\phi(z)$ parametrized by a set of variational parameters $\phi$. Using Jensen's inequality, we can lower bound the marginal log-likelihood of x as an expectation with respect to q:

$$\log p_\theta(x) = \log \sum_z p_\theta(x, z) = \log \sum_z q_\phi(z) \cdot \frac{p_\theta(x, z)}{q_\phi(z)} \ge \sum_z q_\phi(z) \cdot \log \frac{p_\theta(x, z)}{q_\phi(z)} = \mathbb{E}_q\left[\log p_\theta(x, z) - \log q_\phi(z)\right]. \qquad (1)$$
The evidence lower bound (ELBO) above is tight when $q_\phi(z) = p_\theta(z|x)$. Therefore, variational inference can be seen as the problem of computing parameters $\phi$ from an approximating family of distributions Q such that the ELBO can be evaluated efficiently and the approximate posterior over the latent variables is close to the true posterior.

In the setting we consider, we only have access to samples $x \sim p(x)$ from the underlying distribution. Further, we can amortize the cost of inference by learning a single data-dependent variational posterior $q_\phi(z|x)$ [9]. This increases the generalization strength of our approximate posterior and speeds up inference at test time. Hence, learning using amortized variational inference optimizes the average ELBO (across all x) jointly over the model parameters ($\theta$) as well as the variational parameters ($\phi$).
2.2 Learning based on importance sampling

A tighter lower bound of the log-likelihood can be obtained using importance sampling (IS) [4]. From this perspective, we view $q_\phi(z|x)$ as a proposal distribution and optimize the following lower bound:

$$\log p_\theta(x) \ge \mathbb{E}_q\left[\log \frac{1}{S}\sum_{i=1}^{S}\frac{p_\theta(x, z_i)}{q_\phi(z_i|x)}\right] \qquad (2)$$

where each of the S samples is drawn from $q_\phi(z|x)$. The IS estimate reduces to the variational objective for S = 1 in Eq. (1). From Theorem 1 of [4], the IS estimate is also a lower bound on the true log-likelihood of a model and is asymptotically unbiased under mild conditions. Furthermore, increasing S will never lead to a weaker lower bound.
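On a toy discrete model where $\log p_\theta(x, z)$ can be enumerated exactly, the ordering of the two bounds is easy to check numerically; everything below (the joint, the mean-field proposal) is invented purely for illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

n = 8                                               # binary latent variables
w = rng.normal(size=n)

def log_p_xz(z):                                    # toy joint log p_theta(x, z), z: (S, n)
    return z @ w - 0.5 * np.abs(z.sum(axis=1) - 3)

q = rng.uniform(0.2, 0.8, size=n)                   # mean-field proposal q_phi(z|x)

def log_q(z):
    return (z * np.log(q) + (1 - z) * np.log1p(-q)).sum(axis=1)

S = 5000
z = (rng.random((S, n)) < q).astype(float)          # S samples from q
log_w = log_p_xz(z) - log_q(z)                      # importance log-weights

elbo = log_w.mean()                                 # Monte Carlo estimate of Eq. (1)
is_bound = np.logaddexp.reduce(log_w) - np.log(S)   # Eq. (2): log of the mean importance weight

# Exact log p(x) by enumerating all 2^n configurations, for reference.
grid = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1).astype(float)
log_px = np.logaddexp.reduce(log_p_xz(grid))
print(elbo <= is_bound, is_bound <= log_px)         # first holds always; second up to MC noise
```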
3 Learning using random projections
Complex data distributions are well represented by generative models that are flexible and have many
modes. Even though the posterior is generally much more peaked than the prior, learning a model
with multiple modes can help represent arbitrary structure and supports multiple explanations for the
observed data. This largely explains the empirical success of deep models for representational learning,
where the number of modes grows nearly exponentially with the number of hidden layers [1, 22].
Sampling-based estimates for the marginal log-likelihood in Eq. (1) and Eq. (2) have high variance, because they might "miss" important modes of the distribution. Increasing S helps, but one might need an extremely large number of samples to cover the entire posterior if it is highly multi-modal.
3.1 Exponential sampling
Our key idea is to use random projections [7, 15, 28], a hash-based inference scheme that can
efficiently sample an exponentially large number of latent variable configurations from the posterior.
Intuitively, instead of sampling a single latent configuration each time, we sample (exponentially
large) buckets of configurations defined implicitly as the solutions to randomly generated constraints.
Formally, let P be the set of all posterior distributions defined over $z \in \{0, 1\}^n$ conditioned on x.¹ A random projection $R_{A,b}^k : P \to P$ is a family of operators specified by $A \in \{0, 1\}^{k \times n}$, $b \in \{0, 1\}^k$ for a $k \in \{0, 1, \ldots, n\}$. Each operator maps the posterior distribution $p_\theta(z|x)$ to another distribution $R_{A,b}^k[p_\theta(z|x)]$ with probability mass proportional to $p_\theta(z|x)$ and a support set restricted to $\{z : Az = b \bmod 2\}$. When A, b are chosen uniformly at random, this defines a family of pairwise independent hash functions $H = \{h_{A,b}(z) : \{0, 1\}^n \to \{0, 1\}^k\}$ where $h_{A,b}(z) = Az + b \bmod 2$. See [7, 27] for details.

The constraints on the space of assignments of z can be viewed as parity (XOR) constraints. The random projection reduces the dimensionality of the problem in the sense that a subset of k variables becomes a deterministic function of the remaining n − k.² By uniformly randomizing over the choice of the constraints, we can extend similar results from [28] to get the following expressions for the first and second order moments of the normalization constant of the projected posterior distribution.
Lemma 3.1. Given $A \in \{0, 1\}^{k \times n}$ and $b \in \{0, 1\}^k$ with entries drawn i.i.d. from Bernoulli(1/2), for $k \in \{0, 1, \ldots, n\}$, we have the following relationships:

$$\mathbb{E}_{A,b}\left[\sum_{z : Az = b \bmod 2} p_\theta(x, z)\right] = 2^{-k}\, p_\theta(x) \qquad (3)$$

$$\mathrm{Var}\left(\sum_{z : Az = b \bmod 2} p_\theta(x, z)\right) = 2^{-k}\left(1 - 2^{-k}\right) \sum_z p_\theta(x, z)^2 \qquad (4)$$
Hence, a typical random projection of the posterior distribution partitions the support into $2^k$ subsets or buckets, each containing $2^{n-k}$ states. In contrast, typical Monte Carlo estimators for variational inference and importance sampling can be thought of as partitioning the state space into $2^n$ subsets, each containing a single state.
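Lemma 3.1 is easy to verify numerically on a toy distribution: averaging the bucket mass over many random (A, b) pairs recovers $2^{-k} p_\theta(x)$, as in the sketch below (all quantities are synthetic).

```python
import numpy as np
rng = np.random.default_rng(1)

n, k = 8, 3
grid = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1).astype(int)  # all 2^n states z
p = rng.random(2 ** n)                         # unnormalized toy p_theta(x, z)
px = p.sum()                                   # p_theta(x)

bucket_sums = []
for _ in range(2000):
    A = rng.integers(0, 2, size=(k, n))        # iid Bernoulli(1/2) entries
    b = rng.integers(0, 2, size=k)
    in_bucket = ((grid @ A.T) % 2 == b).all(axis=1)   # states z with Az = b mod 2
    bucket_sums.append(p[in_bucket].sum())

print(np.mean(bucket_sums), 2.0 ** -k * px)    # Eq. (3): these agree on average
```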
There are two obvious challenges with this random projection approach:
1. What is a good proposal distribution to select the appropriate constraint sets, i.e., buckets?
2. Once we select a bucket, how can we perform efficient inference over the (exponentially large number of) configurations within the bucket?

¹ For brevity, we use binary random variables, although our analysis extends to discrete random variables.
² This is the typical case: randomly generated constraints can be linearly dependent, leading to larger buckets.
Surprisingly, using a uniform proposal for 1) and a simple mean-field inference strategy for 2), we will
provide an estimator for the marginal log-likelihood that will guarantee tight bounds for the quality
of our solution. Unlike the estimates produced by variational inference in Eq. (1) and importance
sampling in Eq. (2) which are stochastic lower bounds for the true log-likelihood, our estimate will
be a provably tight approximation for the marginal log-likelihood with high probability using a small
number of samples, assuming we can compute an optimal mean-field approximation. Given that
finding an optimal mean-field (fully factored) approximation is a non-convex optimization problem,
our result does not violate known worst-case hardness results for probabilistic inference.
3.2 Tighter guarantees on the marginal log-likelihood
Intuitively, we want to project the posterior distribution in a "predictable" way such that key properties
are preserved. Specifically, in order to apply the results in Lemma 3.1, we will use a uniform proposal
for any given choice of constraints. Secondly, we will reason about the exponential configurations
corresponding to any given choice of constraint set using variational inference with an approximating
family of tractable distributions Q. We follow the proof strategy of [15] and extend their work on
bounding the partition function for inference in undirected graphical models to the learning setting
for directed latent variable models. We assume the following:
Assumption 3.1. The set D of degenerate distributions, i.e., distributions which assign all the
probability mass to a single configuration, is contained in Q: $D \subseteq Q$.
This assumption is true for most commonly used approximating families of distributions, such as mean-field $Q_{MF} = \{q(z) : q(z) = q_1(z_1) \cdots q_n(z_n)\}$, structured mean-field [3], etc. We now define
a projected variational inference problem as follows:
Definition 3.1. Let $A_t^k \in \{0, 1\}^{k \times n}$ and $b_t^k \in \{0, 1\}^k$ have entries drawn i.i.d. from Bernoulli(1/2), for $k \in [0, 1, \cdots, n]$ and $t \in [1, 2, \cdots, T]$. Let Q be a family of distributions such that Assumption 3.1 holds. The optimal solutions of the projected variational inference problems, $\gamma_t^k$, are defined as follows:

$$\log \gamma_t^k(x) = \max_{q \in Q} \sum_{z : A_t^k z = b_t^k \bmod 2} q_\phi(z|x)\left[\log p_\theta(x, z) - \log q_\phi(z|x)\right] \qquad (5)$$
We now derive bounds on the marginal likelihood $p_\theta(x)$ using two estimators that aggregate solutions to the projected variational inference problems.
3.2.1 Bounds based on mean aggregation
Our first estimator is a weighted average of the projected variational inference problems.
Definition 3.2. For any given k, the mean estimator over T instances of the projected variational inference problems is defined as follows:

$$L_\mu^{k,T}(x) = \frac{1}{T}\sum_{t=1}^{T} \gamma_t^k(x)\, 2^k. \qquad (6)$$
Note that the stochasticity in the mean estimator is due to the choice of the random matrices $A_t^k, b_t^k$ in Definition 3.1. Consequently, we obtain the following guarantees:

Theorem 3.1. The mean estimator is a lower bound for $p_\theta(x)$ in expectation:

$$\mathbb{E}\left[L_\mu^{k,T}(x)\right] \le p_\theta(x).$$

Moreover, there exists a $k^*$ and a positive constant $\alpha$ such that for any $\delta > 0$, if $T \ge \frac{1}{\alpha}\log(2n/\delta)$, then with probability at least $(1 - 2\delta)$,

$$L_\mu^{k^*,T}(x) \ge \frac{p_\theta(x)}{64(n+1)}.$$
Proof sketch: For the first part of the theorem, note that the solution of a projected variational problem for any choice of $A_t^k$ and $b_t^k$ with a fixed k in Eq. (5) is a lower bound to the sum $\sum_{z : A_t^k z = b_t^k \bmod 2} p_\theta(x, z)$ by Eq. (1). Now, we can use Eq. (3) in Lemma 3.1 to obtain the upper bound in expectation. The second part of the proof extends naturally from Theorem 3.2, which we state next. Please refer to the supplementary material for a detailed proof.
3.2.2 Bounds based on median aggregation

We can additionally aggregate the solutions to Eq. (5) using the median estimator. This gives us tighter guarantees, including a lower bound that does not require us to take an expectation.

Definition 3.3. For any given k, the median estimator over T instances of the projected variational inference problems is defined as follows:

$$L_{Md}^{k,T}(x) = \mathrm{Median}\left(\gamma_1^k(x), \cdots, \gamma_T^k(x)\right) 2^k. \qquad (7)$$
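Given the T projected solutions $\log \gamma_t^k(x)$, both aggregators are one-liners; the sketch below works in log space for numerical stability, and the inputs are toy values.

```python
import numpy as np

def mean_estimator(log_gamma, k):
    """Eq. (6): (1/T) * sum_t gamma_t * 2^k, computed in log space."""
    T = len(log_gamma)
    return np.logaddexp.reduce(log_gamma) - np.log(T) + k * np.log(2.0)

def median_estimator(log_gamma, k):
    """Eq. (7): median_t(gamma_t) * 2^k (the median commutes with the monotone log)."""
    return np.median(log_gamma) + k * np.log(2.0)

log_gamma = np.array([-10.2, -9.8, -11.0])     # log gamma_t^k from T = 3 projections
print(mean_estimator(log_gamma, k=3), median_estimator(log_gamma, k=3))
```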
The guarantees we obtain through the median estimator are formalized in the theorem below:
Theorem 3.2. For the median estimator, there exists a $k^* > 0$ and a positive constant $\alpha$ such that for any $\delta > 0$, if $T \ge \frac{1}{\alpha}\log(2n/\delta)$, then with probability at least $(1 - 2\delta)$,

$$4\, p_\theta(x) \ge L_{Md}^{k^*,T}(x) \ge \frac{p_\theta(x)}{32(n+1)}.$$
Proof sketch: The upper bound follows from the application of Markov's inequality to the positive random variable $\sum_{z : A_t^k z = b_t^k \bmod 2} p_\theta(x, z)$ (whose first moment is bounded by Lemma 3.1), which $\gamma_t^k(x)$ lower bounds. The lower bound of the above theorem extends a result from Theorem 2 of [15]. Please refer to the supplementary material for a detailed proof.
Hence, the rescaled variational solutions aggregated through a mean or median can provide tight
bounds on the log-likelihood estimate for the observed data with high probability, unlike the ELBO
estimates in Eq. (1) and Eq. (2), which could be arbitrarily far from the true log-likelihood.
4 Algorithmic framework
In recent years, there have been several algorithmic advancements in variational inference and
learning using black-box techniques [23]. These techniques involve a range of ideas such as the use
of mini-batches, amortized inference, Monte Carlo gradient computation, etc., for scaling variational
techniques to large data sets. See Section 6 for a discussion. In this section, we integrate random
projections into a black-box algorithm for belief networks, a class of directed, discrete latent variable
models. These models are especially hard to learn, since the "reparametrization trick" [17] is not applicable to discrete latent variables, leading to gradient updates with high variance.
4.1 Model specification
We will describe our algorithm using the architecture of a sigmoid belief network (SBN), a multi-layer
perceptron which is the basic building block for directed deep generative models with discrete latent
variables [21]. A sigmoid belief network consists of L densely connected layers of binary hidden
units (Z1:L ) with the bottom layer connected to a single layer of binary visible units (X). The nodes
and edges in the network are associated with biases and weights respectively. The state of the units
in the top layer ($Z^L$) is a sigmoid function ($\sigma(\cdot)$) of the corresponding biases. For all other layers,
the conditional distribution of any unit given its parents is represented compactly by a non-linear
activation of the linear combination of the weights of parent units with their binary state and an
additive bias term. The generative process can be summarized as follows:

$$p(Z_i^L = 1) = \sigma(b_i^L); \quad p(Z_i^l = 1 \mid z^{l+1}) = \sigma(W^{l+1} \cdot z^{l+1} + b_i^l); \quad p(X_i = 1 \mid z^1) = \sigma(W^1 \cdot z^1 + b_i^0)$$
In addition to the basic SBN design, we also consider the amortized inference setting. Here, we have an inference network with the same architecture as the SBN, but with the feedforward loop running in the reverse direction, from the input (x) to the output $q(z^L|x)$.
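Ancestral sampling from such an SBN is immediate; the sketch below uses a single hidden layer of 200 units and 784 visible units, with randomly initialized parameters purely for illustration.

```python
import numpy as np
rng = np.random.default_rng(2)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

W1 = 0.01 * rng.normal(size=(784, 200))        # weights from hidden layer z^1 to visible x
b1 = np.zeros(200)                             # hidden (top-layer) biases
b0 = np.zeros(784)                             # visible biases

z1 = (rng.random(200) < sigmoid(b1)).astype(float)            # top layer: Bernoulli(sigmoid(b))
x = (rng.random(784) < sigmoid(W1 @ z1 + b0)).astype(float)   # visible layer given z^1
```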
Algorithm 1 VB-MCS: Learning belief networks with random projections.

VB-MCS(Mini-batches $\{x_h\}_{h=1}^H$, Generative Network (G, $\theta$), Inference Network (I, $\phi$), Epochs E, Constraints k, Instances T)
for e = 1 : E do
    for h = 1 : H do
        for t = 1 : T do
            Sample $A \in \{0, 1\}^{k \times n}$ and $b \in \{0, 1\}^k$ with i.i.d. Bernoulli(1/2) entries
            C, b' ← RowReduce(A, b)
            $\log \gamma_t^k(x_h)$ ← ComputeProjectedELBO($x_h$, G, $\theta$, I, $\phi$, C, b')
        $\log L^{k,T}(x_h)$ ← log[Aggregate($\gamma_1^k(x_h), \cdots, \gamma_T^k(x_h)$)]
        Update $\theta, \phi$ ← StochasticGradientDescent($\nabla \log L^{k,T}(x_h)$)
return $\theta, \phi$
4.2 Algorithm
The basic algorithm for learning belief networks with augmented inference networks is inspired by
the wake-sleep algorithm [13]. One key difference from the wake-sleep algorithm is that there is a
single objective being optimized. This is typically the ELBO (see Eq. (1)), and optimization is done
using stochastic mini-batch descent jointly over the model and inference parameters.
Training consists of two alternating phases for every mini-batch of points. The first step makes a
forward pass through the inference network producing one or more samples from the top layer of the
inference network, and finally, these samples complete a forward pass through the generative network.
The reverse pass computes the gradient of the model and variational parameters with respect to the
ELBO in Eq. (1) and uses these gradient updates to perform a gradient descent step on the ELBO.
We now introduce a black-box technique within this general learning framework, which we refer to
as Variational Bayes on Monte Carlo Steroids (VB-MCS) due to the exponential sampling property.
VB-MCS requires as input a data-dependent parameter k, which is the number of variables to constrain.
At every training epoch, we first sample entries of a full-rank constraint matrix $A \in \{0, 1\}^{k \times n}$ and vector $b \in \{0, 1\}^k$ and then optimize the objective corresponding to a projected variational
inference problem defined in Eq. (5). This procedure is repeated for T problem instances, and the
individual likelihood estimates are aggregated using the mean or median based estimators defined in
Eq. (6) and Eq. (7). The pseudocode is given in Algorithm 1.
For computing the projected ELBO, the inference network considers the marginal distribution of only the n − k free latent variables. We consider the mean-field family of approximations, where the free latent variables are sampled independently from their corresponding marginal distributions. The remaining k latent variables are specified by the parity constraints. Using Gaussian elimination, the original linear system $Az = b \bmod 2$ is reduced to a row echelon representation of the form $Cz = b'$, where $C = [I_{k \times k} \mid A']$ such that $A' \in \{0, 1\}^{k \times (n-k)}$ and $b' \in \{0, 1\}^k$. Finally, we read off the constrained variables as $z_j = \bigoplus_{i=k+1}^{n} c_{ji} z_i \oplus b'_j$ for $j = 1, 2, \cdots, k$, where $\oplus$ is the XOR operator.
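A sketch of the row reduction and the XOR read-off follows; the Gauss-Jordan routine assumes a pivot exists in each of the first k columns (guaranteed when the leading k × k block of A is invertible over GF(2)), and the example matrix is chosen to satisfy this.

```python
import numpy as np

def row_reduce_gf2(A, b):
    """Gauss-Jordan elimination mod 2, returning C = [I | A'] and b'."""
    A, b = A.copy() % 2, b.copy() % 2
    k, _ = A.shape
    for i in range(k):
        pivot = i + int(np.argmax(A[i:, i]))   # assumes some row from i down has a 1 in column i
        A[[i, pivot]] = A[[pivot, i]]
        b[[i, pivot]] = b[[pivot, i]]
        for r in range(k):
            if r != i and A[r, i]:
                A[r] ^= A[i]                   # XOR row operations preserve the solution set
                b[r] ^= b[i]
    return A, b

A = np.array([[1, 0, 0, 1, 0, 1, 1, 0],        # leading 3 x 3 block invertible over GF(2)
              [1, 1, 0, 0, 1, 0, 1, 1],
              [0, 1, 1, 1, 0, 1, 0, 1]])
b = np.array([1, 0, 1])
k = 3

C, b2 = row_reduce_gf2(A, b)
z_free = np.array([1, 0, 1, 1, 0])             # the n - k free variables (fixed here for clarity)
z_con = (C[:, k:] @ z_free + b2) % 2           # constrained variables via the XOR read-off
z = np.concatenate([z_con, z_free])
assert ((A @ z) % 2 == b).all()                # z satisfies the original parity constraints
```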
5 Experimental evaluation
We evaluated the performance of VB-MCS as a black-box technique for learning discrete, directed latent
variable models for images and documents. Our test architecture is a simple sigmoid belief network
with a single hidden layer consisting of 200 units and a visible layer. Through our experiments, we
wish to demonstrate that the theoretical advantage offered by random projections easily translates
into practice using an associated algorithm such as VB-MCS. We will compare a baseline sigmoid
belief network (Base-SBN) learned using Variational Bayes and evaluate it against a similar network
with parity constraints imposed on k latent variables (henceforth referred to as k-SBN) and learned
using VB-MCS. We now discuss some parameter settings below, which have been fixed with respect to
the best validation performance of Base-SBN on the Caltech 101 Silhouettes dataset.
Implementation details: The prior probabilities for the latent layer are specified using autoregressive
connections [10]. The learning rate was fixed based on validation performance to $3 \times 10^{-4}$ for the
generator network and reduced by a factor of 5 for the inference network. Mini-batch size was fixed
Table 1: Test performance evaluation of VB-MCS. Random projections lead to improvements in terms of estimated negative log-likelihood and log-perplexity.

Dataset                           Evaluation Metric    Base      k=5       k=10      k=20
Vision: Caltech 101 Silhouettes   NLL                  251.04    245.60    248.79    256.60
Language: NIPS Proceedings        log-perplexity       5009.79   4919.35   4919.22   4920.71
Figure 1: Denoised images (center) of the actual ones (left) and sample images (right) generated from
the best k-SBN model trained on the Caltech 101 Silhouettes dataset.
to 20. Regularization was imposed by early stopping of training after 50 epochs. The optimizer used
is Adam [16]. For k-SBN, we show results for three values of k: 5, 10, and 20, and the aggregation is
done using the median estimator with T = 3.
5.1 Generative modeling of images in the Caltech 101 Silhouettes dataset
We trained a generative model for silhouette images of 28 × 28 dimensions from the Caltech 101
Silhouettes dataset³. The dataset consists of 4,100 train images, 2,264 validation images and 2,307
test images. This is a particularly hard dataset due to the asymmetry in silhouettes compared to other
commonly used structured datasets. As we can see in Table 1, the k-SBNs trained using VB-MCS can
outperform the Base-SBN by several nats in terms of the negative log-likelihood estimates on the test
set. The performance for k-SBNs dips as we increase k, which is related to the empirical quality of
the approximation our algorithm makes for different k values.
The qualitative evaluation results of SBNs trained using VB-MCS and additional control variates [19]
on denoising and sampling are shown in Fig. 1. While the qualitative evaluation is subjective, the
denoised images seem to smooth out the edges in the actual images. The samples generated from the
model largely retain essential qualities such as silhouette connectivity and varying edge patterns.
5.2 Generative modeling of documents in the NIPS Proceedings dataset
We performed the second set of experiments on the latest version of the NIPS Proceedings dataset,⁴
which consists of the distribution of words in all papers that appeared in NIPS from 1988-2003. We
performed an 80/10/10 split of the dataset into 1,986 train, 249 validation, and 248 test documents.
The relevant metric here is the average perplexity per word for $D$ documents, given by $P = \exp\left(-\frac{1}{D}\sum_{i=1}^{D}\frac{1}{L_i}\log p(x_i)\right)$, where $L_i$ is the length of document $i$. We feed in raw word counts per
document as input to the inference network and consequently, the visible units in the generative
network correspond to the (unnormalized) probability distribution of words in the document.
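For reference, the metric can be computed as in the following sketch (illustrative code with fabricated inputs; in the paper the document log-likelihoods log p(x_i) are themselves estimates produced by VB-MCS).

```python
import math

def average_perplexity(log_probs, lengths):
    """P = exp(-(1/D) * sum_i (1/L_i) * log p(x_i)), in nats per word."""
    D = len(log_probs)
    avg = sum(lp / L for lp, L in zip(log_probs, lengths)) / D
    return math.exp(-avg)

# Toy usage with made-up likelihood estimates for D = 3 documents.
print(average_perplexity([-5200.0, -4100.0, -6300.0], [1000, 800, 1200]))
```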
Table 1 shows the log-perplexity scores (in nats) on the test set. From the results, we again observe
the superior performance of all k-SBNs over the Base-SBN. The different k-SBNs have comparable
performance, although we do not expect this observation to hold true more generally for other
³ Available at https://people.cs.umass.edu/~marlin/data.shtml
⁴ Available at http://ai.stanford.edu/~gal/
datasets. For a qualitative evaluation, we sample the relative word frequencies in a document and
then generate the top-50 words appearing in a document. One such sampling is shown in Figure 2.
The bag-of-words appears to be semantically reflective of co-appearing words in a NIPS paper.
6 Discussion and related work
There have been several recent advances in approximate inference and learning techniques from both a theoretical and empirical perspective. On the empirical side, the various black-box techniques [23] such as mini-batch updates [14], amortized inference [9], etc. are key to scaling and generalizing variational inference to a wide range of settings. Additionally, advancements in representational learning have made it possible to specify and learn highly expressive directed latent variable models based on neural networks, e.g., [4, 10, 17, 19, 20, 24]. Rather than taking a purely variational or sampling-based approach, these techniques stand out in combining the computational efficiency of variational techniques with the generalizability of Monte Carlo methods [25, 26].

Figure 2: Bag-of-words for a 50-word document sampled from the best k-SBN model trained on the NIPS Proceedings dataset.
On the theoretical end, there is a rich body of recent work in hash-based inference applied to
sampling [11], variational inference [15], and hybrid inference techniques at the intersection of
the two paradigms [28]. The techniques based on random projections have not only led to better algorithms but, more importantly, they come with strong theoretical guarantees [5, 6, 7].
In this work, we attempt to bridge the gap between theory and practice by employing hash-based
inference techniques to the learning of latent variable models. We introduced a novel bound on the
marginal log-likelihood of directed latent variable models with discrete latent units. Our analysis
extends the theory of random projections for inference previously done in the context of discrete,
fully-observed log-linear undirected models to the general setting of both learning and inference in
directed latent variable models with discrete latent units while the observed data can be discrete or
continuous. Our approach combines a traditional variational approximation with random projections
to get provable accuracy guarantees and can be used to improve the quality of traditional ELBOs
such as the ones obtained using a mean-field approximation.
The power of black-box techniques lies in their wide applicability, and in the second half of the paper,
we close the loop by developing VB-MCS, an algorithm that incorporates the theoretical underpinnings
of random projections into belief networks that have shown tremendous promise for generative
modeling. We demonstrate an application of this idea to sigmoid belief networks, which can also
be interpreted as probabilistic autoencoders. VB-MCS simultaneously learns the parameters of the
(generative) model and the variational parameters (subject to random projections) used to approximate
the intractable posterior. Our approach can still leverage backpropagation to efficiently compute
gradients of the relevant quantities. The resulting algorithm is scalable and the use of random
projections significantly improves the quality of the results on benchmark data sets in both vision and
language domains.
Future work will involve devising random projection schemes for latent variable models with continuous latent units and other variational families beyond mean-field [24]. On the empirical side, it would
be interesting to investigate potential performance gains by employing complementary heuristics
such as variance reduction [19] and data augmentation [8] in conjunction with random projections.
Acknowledgments
This work was supported by grants from the NSF (grant 1649208) and Future of Life Institute (grant
2016-158687).
References
[1] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in ML, 2(1):1-127, 2009.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[3] A. Bouchard-Côté and M. I. Jordan. Optimization of structured mean field objectives. In UAI, 2009.
[4] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
[5] S. Ermon, C. Gomes, A. Sabharwal, and B. Selman. Low-density parity constraints for hashing-based
discrete integration. In ICML, 2014.
[6] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Optimization with parity constraints: From binary
codes to discrete integration. In UAI, 2013.
[7] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Taming the curse of dimensionality: Discrete
integration by hashing and optimization. In ICML, 2013.
[8] Z. Gan, R. Henao, D. E. Carlson, and L. Carin. Learning deep sigmoid belief networks with data
augmentation. In AISTATS, 2015.
[9] S. Gershman and N. D. Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the
Thirty-Sixth Annual Conference of the Cognitive Science Society, 2014.
[10] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. In ICML,
2014.
[11] S. Hadjis and S. Ermon. Importance sampling over sets: A new probabilistic inference scheme. In UAI,
2014.
[12] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[13] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158-1161, 1995.
[14] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14(1):1303-1347, 2013.
[15] L.-K. Hsu, T. Achim, and S. Ermon. Tight variational bounds via random projections and I-projections. In
AISTATS, 2016.
[16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[17] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
[18] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable
unsupervised learning of hierarchical representations. In ICML, 2009.
[19] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
[20] A. Mnih and D. J. Rezende. Variational inference for Monte Carlo objectives. In ICML, 2016.
[21] R. M. Neal. Connectionist learning of belief networks. AIJ, 56(1):71-113, 1992.
[22] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In UAI, 2011.
[23] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In AISTATS, 2013.
[24] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015.
[25] T. Salimans, D. P. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging
the gap. In ICML, 2015.
[26] M. Titsias and M. Lázaro-Gredilla. Local expectation gradients for black box variational inference. In NIPS, 2015.
[27] S. P. Vadhan et al. Pseudorandomness. In Foundations and Trends in TCS, 2011.
[28] M. Zhu and S. Ermon. A hybrid approach for probabilistic inference using random projections. In ICML,
2015.
That Discovers the Structure of
Context-Free Languages
Michael C. Mozer and Sreerupa Das
Department of Computer Science &
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Abstract
We present a neural net architecture that can discover hierarchical and recursive structure in symbol strings. To detect structure at multiple levels,
the architecture has the capability of reducing symbols substrings to single
symbols, and makes use of an external stack memory. In terms of formal
languages, the architecture can learn to parse strings in an LR(O) contextfree grammar. Given training sets of positive and negative exemplars,
the architecture has been trained to recognize many different grammars.
The architecture has only one layer of modifiable weights, allowing for a
straightforward interpretation of its behavior.
Many cognitive domains involve complex sequences that contain hierarchical or
recursive structure, e.g., music, natural language parsing, event perception. To illustrate, "the spider that ate the hairy fly" is a noun phrase containing the embedded noun phrase "the hairy fly." Understanding such multilevel structures requires
forming reduced descriptions (Hinton, 1988) in which a string of symbols or states
("the hairy fly") is reduced to a single symbolic entity (a noun phrase). We present
a neural net architecture that learns to encode the structure of symbol strings via
such red uction transformations.
The difficult problem of extracting multilevel structure from complex, extended
sequences has been studied by Mozer (1992), Ring (1993), Rohwer (1990), and
Schmidhuber (1992), among others. While these previous efforts have made some
Figure 1: The demon model (demon units watch the top of the stack; symbols move from the input queue onto the stack and are pushed and popped during processing).
progress, no one has claimed victory over the problem. Our approach is based on a new perspective, one of symbolic reduction transformations, which affords a fresh attack on the problem.
1 A BLACKBOARD ARCHITECTURE
Our inspiration is a blackboard-style architecture that works as follows. The input, a sequence of symbols, is copied onto a blackboard (a scratch-pad memory) one symbol at a time. A set of demons watch over the blackboard, each looking for a specific pattern of symbols. When a demon observes its pattern, it fires, causing the pattern to be replaced by a symbol associated with that demon, which we'll call its identity. This process continues until the entire input string has been read or no demon can fire. The sequence of demon firings and the final blackboard contents specify the structure of the input.
The model we present is a simplified version of this blackboard architecture. The
blackboard is implemented as a stack. Consequently, the demons have no control
over where they write or read a symbol; they simply push and pop symbols from
the stack. The other simplification is that the demon firing is based on template
matching, rather than a more sophisticated form of pattern matching.
The demon model is sketched in Figure 1. An input queue holds the input string to be parsed, which is gradually transferred to the stack. The top k stack symbols are encoded in a set of stack units; in the current implementation, k = 2. Each demon is embodied by a special processing unit which receives input from the stack units. The weights of each demon unit specify a pair of symbols, which the demon unit matches against the two stack symbols. If there is a match, the demon unit pops the top two stack symbols and pushes its identity. If no demon unit matches, an additional unit, called the default unit, becomes active. The default unit is responsible for transferring a symbol from the input queue onto the stack.
Connectionist Symbol Manipulator Discovers Structure of Context-Free Languages
865
Figure 2: The rewrite rules (S → a b, S → a X, X → S b) defining a grammar that generates strings of the form $a^n b^n$, and a parse tree for the string aabb.
2 PARSING CONTEXT-FREE LANGUAGES
Each demon unit reduces a pair of symbols to a single symbol. We can express the operation of a demon as a rewrite rule of the form X → a b, where the lower case letters denote symbols in the input string and upper case letters denote the demon identities, also symbols in their own right. The above rule specifies that when the symbols a and b appear on the top of the stack, in that order, the X demon unit should fire, erasing those two symbols and replacing them with an X. Demon units can respond to internal symbols (demon identities) instead of input symbols, allowing internal symbols on the right hand side of the rule. Demon units can also respond to individual input symbols, achieving rules of the form X → a. Multiple demon units can have the same identity, leading to rewrite rules of a more general form, e.g., X → a b | Y e | d Z | a. This class of rewrite rules can express a subset of context-free grammars. Figure 2 shows a sample grammar that generates strings of the form $a^n b^n$ and a parse tree for the input string aabb. The demon model essentially constructs such parse trees via the sequence of reduction operations.
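The following sketch (our own idealized, fully discrete rendering of the model, not the authors' code; it covers only two-symbol templates and greedy reduction) makes this shift/reduce loop concrete for the $a^n b^n$ grammar of Figure 2.

```python
def demon_parse(string, rules):
    """Idealized, discrete demon model: shift symbols from the input queue onto
    a stack; whenever the top two stack symbols match a demon's template, pop
    them and push that demon's identity. Accept if, once the queue is empty and
    no demon can fire, the stack holds exactly ['S']."""
    stack, queue = [], list(string)
    while True:
        if tuple(stack[-2:]) in rules:        # a demon fires: reduce
            stack[-2:] = [rules[tuple(stack[-2:])]]
        elif queue:                           # default unit: shift one symbol
            stack.append(queue.pop(0))
        else:
            return stack == ['S']

# Rewrite rules for a^n b^n:  S -> a b | a X,  X -> S b
rules = {('a', 'b'): 'S', ('a', 'X'): 'S', ('S', 'b'): 'X'}
for s in ['ab', 'aabb', 'aaabbb', 'aab', 'ba']:
    print(s, demon_parse(s, rules))           # True for the first three only
```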
That each rule has only one or two symbols on the right hand side imposes no
limitation on the class of grammars that can be recognized. However, the demon
model does require certain knowledge about the grammars to be identified. First,
the maximum number of rewrite rules and the maximum number of rules having the
same left-hand side must be specified in advance. This is because the units have
to be allocated prior to learning. Second, the LR-class of the grammar must be
given. To explain, any context-free grammar can be characterized as LR(n), which indicates that the strings of the grammar can be parsed from left to right with n symbols of look-ahead on the input queue. The demon model requires that n be specified in advance. In the present work, we examine only LR(0) grammars, but
the architecture can readily be generalized to arbitrary n.
Giles et al. (1990), Sun et al. (1990), and Das, Giles, and Sun (1992) have previously
explored the learning of context-free grammars in a neural net. Their approach was
based on the automaton perspective of a recognizer, where the primary interest was
to learn the dynamics of a pushdown automaton. There has also been significant
work in context-free grammar inference using symbolic approaches. In general, these
approaches require a significant amount of prior information about the grammar
and, although theoretically sound, have not proven terribly useful in practice. A
promising exception is the recent proposal of Stolcke (1993).
3 CONTINUOUS DYNAMICS
So far, we have described the model in a discrete way: demon firing is all-or-none and mutually exclusive, corresponding to the demon units achieving a unary
representation. This may be the desired behavior following learning, but neural net
learning algorithms like back propagation require exploration in continuous state
and weight spaces and therefore need to allow partial activity of demon units. The
continuous activation dynamics follow.
Demon unit i computes the distance between its weights, $w_i$, and the input, $x$: $dist_i = b_i \, |w_i - x|^2$, where $b_i$ is an adjustable bias associated with the unit. The activity of unit i, denoted $s_i$, is computed via a normalized exponential transform (Bridle, 1990; Rumelhart, in press),

$$s_i = \frac{e^{-dist_i}}{\sum_j e^{-dist_j}},$$

which enforces a competition among the units. A special unit, called the default unit, is designed to respond when none of the demons fire strongly. Its activity, $s_{def}$, is computed like that of any demon unit with $dist_{def} = b_{def}$.
4 CONTINUOUS STACK
Because demon units can be partially active, stack operations need to be performed partially. This can be accomplished with a continuous stack (Giles et al., 1990). Unlike a discrete stack where an item is either present or absent, items can be present to varying degrees. Each item on the stack has an associated thickness, a scalar in the interval [0,1] indicating what fraction of the item is present (Figure 3).
To understand how the thickness plays a role in processing, we digress briefly and explain the encoding of symbols. Both on the stack and in the network, symbols are represented by numerical vectors that have one component per symbol. The vector representation of some symbol X, denoted $r_X$, has value 1 for the component corresponding to X and 0 for all other components. If the symbol has thickness t, the vector representation is $t \, r_X$.
Although items on the stack have different thicknesses, the network is presented with composite symbols having thickness 1.0. Composite symbols are formed by combining stack items. For example, in Figure 3, composite symbol 1 is defined as the vector $0.2\,r_X + 0.5\,r_Z + 0.3\,r_V$. The input to the demon network consists of the top two composite symbols on the stack.
The advantages of a continuous stack are twofold. First, it is required for network learning; if a discrete stack were used, a small change in weights could result in a big (discrete) change in the stack. This was the motivation underlying the continuous stack used by Giles et al. Second, the continuous stack is differentiable and hence allows us to back propagate error through the stack during learning. While we have summarized this point in one sentence, the reader must appreciate the fact that it is no small feat! Giles et al. did not consider back propagation through the stack.
Each time step, the network performs two operations on the stack:
Figure 3: A continuous stack. The symbols indicate the contents; the height of a stack entry indicates its thickness, also given by the number to the right (from the top: X 0.2, Z 0.5, V 0.4, X 0.7, Y 0.4). The top composite symbol on the stack is a combination of the items forming a total thickness of 1.0; the next composite symbol is a combination of the items making up the next 1.0 units of thickness.
Pop. If a demon unit fires, the top two composite symbols should be popped from the stack (to be replaced by the demon's identity). If no demon unit fires, in which case the default unit becomes active, the stack should remain unchanged. These behaviors, as well as interpolated behaviors, are achieved by multiplying by $s_{def}$ the thickness of any portion of a stack item contributing to the top two composite symbols. Remember that $s_{def}$ is 0 when one or more demon units are strongly active, and is 1 when the default unit is fully active.

Push. The symbol written onto the stack is the composite symbol formed by summing the identity vectors of the demon units, weighted by their activities: $\sum_i s_i r_i$, where $r_i$ is the vector representing demon i's identity. Included in this summation is the default unit, where $r_{def}$ is defined to be the composite symbol over thickness $s_{def}$ of the input queue. (After a thickness of $s_{def}$ is read from the input queue, it is removed from the queue.)
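Putting pop and push together, one soft stack update can be sketched as below (our own illustrative code; the queue is represented like the stack, and the default unit's contribution enters the sum already scaled by $s_{def}$ because composite symbols are normalized to thickness 1.0).

```python
import numpy as np

def continuous_stack_step(stack, queue, demon_acts, demon_ids, s_def):
    """One soft update: thin the top 2.0 units of thickness by s_def (pop),
    then push the blended symbol sum_i s_i r_i + s_def * r_def (thickness 1.0).

    `stack` and `queue` are lists of (numpy vector, thickness) pairs, top first.
    """
    new_stack, remaining = [], 2.0                 # pop acts only on the portion
    for vec, thick in stack:                       # under the top two composites
        inside = min(thick, max(remaining, 0.0))
        if inside * s_def > 1e-9:
            new_stack.append((vec, inside * s_def))
        if thick - inside > 1e-9:
            new_stack.append((vec, thick - inside))
        remaining -= inside
    pushed, need = np.zeros_like(demon_ids[0]), s_def
    while need > 1e-9 and queue:                   # s_def worth of input symbols
        vec, thick = queue[0]
        used = min(thick, need)
        pushed = pushed + used * vec
        need -= used
        queue[0] = (vec, thick - used)
        if queue[0][1] <= 1e-9:
            queue.pop(0)
    for s, r in zip(demon_acts, demon_ids):        # weighted demon identities
        pushed = pushed + s * r
    new_stack.insert(0, (pushed, 1.0))
    return new_stack, queue
```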
5 TRAINING METHODOLOGY
The system is trained on positive and negative examples of a context-free grammar.
Its task is to classify each input string as grammatical or not. Because the grammars
can always be written such that the root of the parse tree is the symbol S (e.g.,
Figure 2), the stack should contain just S upon completion of processing of a positive example. For a negative example, the stack should contain anything but S.
These criteria can be translated into an objective function as follows. If one assumes a Gaussian noise distribution over outputs, the probability that the top of the stack contains the symbol S following presentation of example i is

$$p^i_{top} \propto e^{-|c_i - r_S|^2},$$

where $c_i$ is the vector representing the top composite symbol on the stack; and the probability that the total thickness of the stack is 1 (i.e., the stack contains exactly one item) is

$$p^i_{one} \propto e^{-\lambda (n-1)^2},$$

where n is the total thickness of the stack and $\lambda$ is a constant. For a positive example, the objective function should be greatest when there is a high probability of S being on the stack and a high probability of it being the sole item on the stack; for a negative example, the objective function should be greatest when either event has a low probability. We thus obtain a likelihood objective function whose logarithm the learning procedure attempts to maximize:

$$L = \prod_{i \,\in\, pos\ examples} p^i_{top}\, p^i_{one} \prod_{i \,\in\, neg\ examples} \left(1 - p^i_{top}\, p^i_{one}\right).$$
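A direct transcription of this criterion (an illustrative sketch; the constant $\lambda$ and the $(1 - p^i_{top} p^i_{one})$ form for negative examples follow our reconstruction of the garbled equations above) is:

```python
import numpy as np

def log_likelihood(c_tops, thicknesses, labels, r_S, lam=1.0):
    """log L over a set of examples.

    c_tops[i]: top composite symbol after processing example i;
    thicknesses[i]: total stack thickness n; labels[i]: 1 if grammatical.
    """
    logL = 0.0
    for c, n, y in zip(c_tops, thicknesses, labels):
        p = np.exp(-np.sum((c - r_S) ** 2)) * np.exp(-lam * (n - 1.0) ** 2)
        logL += np.log(p) if y == 1 else np.log(1.0 - p + 1e-12)
    return logL
```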
Training sets were generated by hand, with a preference for shorter strings. Positive examples were generated from the grammar; negative examples were either randomly generated or were formed by perturbing a grammatical string. In most training sets, there were roughly 3-5 times as many negative examples as positive. One might validly be concerned that we introduced some bias in our selection of examples. If so, it was not deliberate. In the initial experiments reported below, our goal was primarily to demonstrate that under some conditions, the network could actually induce the grammar. In the next phase of our research, we plan a systematic investigation of the number and nature of examples required for successful learning.
The total number of demon units and the (fixed) identity of each was specified
in advance of learning. For the grammar in Figure 2, we provided at least two
S demons and one X demon. Any number of demons beyond the minimum did
not affect performance. The initial weights $\{w_{ij}\}$ were selected from a uniform distribution over the interval [0.45, 0.55]. The $b_i$ were initialized to 1.0.
Before an example is presented, the stack is reset to contain only a single symbol, the
null symbol with vector representation 0 and infinite thickness. The example string
is placed in the input queue. The network is then allowed to run for $2l - 1$ time steps, which is exactly the number of steps required to process any grammatical string of length $l$. One can intuit this fact by considering that it takes two operations to
process each symbol, one to transfer the symbol from the input queue to the stack,
and another to reduce the symbol.
The derivative of the objective function is computed with respect to the weight
parameters using a form of back propagation through time (Rumelhart, Hinton,
& Williams, 1986). This involves "unfolding" the architecture in time and back
propagating through the stack. Weights are then updated to perform gradient
ascent in the log likelihood function.
6 RESULTS AND DISCUSSION
We have successfully trained the architecture on a variety of grammars, including
those shown in Table 1. In each case, the network discriminates positive and negative examples perfectly on the training set. For the first three grammars, additional
(longer) strings were used to test network generalization performance. In each case,
generalization performance was 100%.
Figure 4: Sample weights for $a^n b^n$. Weights are organized by demon unit, whose identities appear above the rectangles. The top and bottom halves of each rectangle represent connections from composite symbols 1 and 2, respectively. The darker the shading of a symbol in a rectangle, the larger the connection strength from the input unit representing that symbol to the demon unit. The weights clearly indicate the three rewrite rules of the grammar.
Table 1: Grammars successfully learned by the demon model

    grammar name              rewrite rules
    a^n b^n                   S → a b | a X;  X → S b
    parenthesis balancing     S → ( );  X → S S | S
    postfix                   S → Y X | S X;  X → Y + | S +;  Y → a | b
    pseudo natural language   S → NP VP;  NP → d NP2 | NP2;  NP2 → n | a n;  VP → v NP
Due to the simplicity of the architecture (the fact that there is only one layer of modifiable weights), the learned weights can often be interpreted as symbolic rewrite rules (Figure 4). It is a remarkable achievement that the numerical optimization framework of neural net learning can be used to discover symbolic rules (see also Mozer & Bachrach, 1991).
The first three grammars were successfully learned by the model of Giles et al.
(1990), although the analysis required to interpret the weights is generally more
cumbersome and tentative. The last grammar could not be learned by their model
(Das et al., 1992).
When more demon units are provided to the model than are required for the domain,
the weights tend to be less interpretable, but generalization performance is just as
good. (Of course, this result can hold for only a limited range of network sizes.)
The model also does well with very small training sets (e.g., three positive, three
negative examples for $a^n b^n$). This is no doubt because the architecture imposes
strong biases on the learning process. We performed some preliminary experiments
with staged training in which the length of strings in the training set was increased
gradually, allowing the model to first learn simple cases and then move on to more
difficult cases. This substantially improved the training time and robustness.
Although the current version of the model is designed for LR(0) context-free grammars, it can be extended to LR(n) by including connections from the first n composite symbols in the input queue to the demon units. However, our focus is not necessarily on building the theoretically most powerful formal language recognizer and learning system; rather, our primary interest has been on integrating symbol manipulation capabilities into a neural network architecture. In this regard, the model makes a clear contribution. It has the ability to represent a string of symbols with a single symbol, and to do so iteratively, allowing for the formation of hierarchical and recursive structures. This is the essence of symbolic information processing, and, in our view, a key ingredient necessary for structure learning.
Acknowledgements
This research was supported by NSF Presidential Young Investigator award IRI9058450 and grant 90-21 from the James S. McDonnell Foundation. Our thanks to
Paul Smolensky, Lee Giles, and Jürgen Schmidhuber for helpful comments regarding
this work.
References

Bridle, J. (1990). Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2 (pp. 211-217). San Mateo, CA: Morgan Kaufmann.

Das, S., Giles, C. L., & Sun, G. Z. (1992). Learning context-free grammars: Capabilities and limitations of a neural network with an external stack memory. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society (pp. 791-795). Hillsdale, NJ: Erlbaum.

Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C., & Chen, D. (1990). Higher order recurrent networks and grammatical inference. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2 (pp. 380-387). San Mateo, CA: Morgan Kaufmann.

Hinton, G. E. (1988). Representing part-whole hierarchies in connectionist networks. Proceedings of the Eighth Annual Conference of the Cognitive Science Society.

Mozer, M. C. (1992). The induction of multiscale temporal structure. In J. E. Moody, S. J. Hanson, & R. P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4 (pp. 275-282). San Mateo, CA: Morgan Kaufmann.

Mozer, M. C., & Bachrach, J. (1991). SLUG: A connectionist architecture for inferring the structure of finite-state environments. Machine Learning, 7, 139-160.

Ring, M. (1993). Learning sequential tasks by incrementally adding higher orders. This volume.

Rohwer, R. (1990). The 'moving targets' training algorithm. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2 (pp. 558-565). San Mateo, CA: Morgan Kaufmann.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume I: Foundations (pp. 318-362). Cambridge, MA: MIT Press/Bradford Books.

Rumelhart, D. E. (in press). Connectionist processing and learning as statistical inference. In Y. Chauvin & D. E. Rumelhart (Eds.), Backpropagation: Theory, architectures, and applications. Hillsdale, NJ: Erlbaum.

Schmidhuber, J. (1992). Learning unambiguous reduced sequence descriptions. In J. E. Moody, S. J. Hanson, & R. P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4 (pp. 291-298). San Mateo, CA: Morgan Kaufmann.

Stolcke, A., & Omohundro, S. (1993). Hidden Markov model induction by Bayesian model merging. This volume.

Sun, G. Z., Chen, H. H., Giles, C. L., Lee, Y. C., & Chen, D. (1990). Connectionist pushdown automata that learn context-free grammars. In Proceedings of the International Joint Conference on Neural Networks (pp. I-577). Hillsdale, NJ: Erlbaum Associates.
x1
100
50
40
OKM
OKM*
OKM*+LPE
OKM*+NPE
80
30
60
20
40
10
20
0
0
0.2
0.4
0.6
F?Score
0.8
1
0
0
OKM*
OKM*+LPE
OKM*+NPE
0.2
0.4
0.6
F?Score
0.8
1
Question Representation Update (QRU)
Ruiyu Li
Jiaya Jia
The Chinese University of Hong Kong
{ryli,leojia}@cse.cuhk.edu.hk
Abstract
Our method aims at reasoning over natural language questions and visual images.
Given a natural language question about an image, our model updates the question
representation iteratively by selecting image regions relevant to the query and
learns to give the correct answer. Our model contains several reasoning layers,
exploiting complex visual relations in the visual question answering (VQA) task.
The proposed network is end-to-end trainable through back-propagation, where its
weights are initialized using pre-trained convolutional neural network (CNN) and
gated recurrent unit (GRU). Our method is evaluated on challenging datasets of
COCO-QA [19] and VQA [2] and yields state-of-the-art performance.
1
Introduction
Visual question answering (VQA) is a new research direction as intersection of computer vision and
natural language processing. Developing stable systems for VQA attracts increasing interests in
multiple communities. Possible applications include bidirectional image-sentence retrieval, human
computer interaction, blind person assistance, etc. It is now still a difficult problem due to many
challenges in visual object recognition and grounding, natural language representation, and common
sense reasoning.
Most recently proposed VQA models are based on image captioning [10, 24, 28]. These methods
have been advanced by the great success of deep learning on building language models [23], image
classification [12] and on visual object detection [6]. Compared with image captioning, where a
plausible description is produced for a given image, VQA requires algorithms to give the correct
answer to a specific human-raised question regarding the content of a given image. It is a more
complex research problem since the method is required to answer different types of questions.
An example related to image content is "What is the color of the dog?". There are also questions requiring extra knowledge or commonsense reasoning, such as "Does it appear to be rainy?".
Properly modeling questions is essential for solving the VQA problem. A commonly employed
strategy is to use a CNN or an RNN to extract semantic vectors. The general issue is that the resulting
question representation lacks detailed information from the given image, which however is vital
for understanding visual content. We take the question and image in Figure 1 as an example. To
answer the original question "What is sitting amongst things have been abandoned?", one needs to know the target object location. Thus the question can be made more specific: "What is discarded on the side of a building near an old book shelf?".
In this paper, we propose a neural network based reasoning model that is able to update the question
representation iteratively by inferring image information. With this new system, it is now possible
to make questions more specific than the original ones focusing on important image information
automatically. Our approach is based on neural reasoner [18], which has recently shown remarkable
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: (a) Question: "What is sitting amongst things have been abandoned?" Answer: Toilet. Before: "What sits in the room that appears to be partially abandoned?" Updated: "What is discarded on the side of a building near an old book shelf?" The questions asked by humans can be ambiguous given an image containing various objects. The Before and Updated questions are the most similar ones based on cosine similarity to the original Question before and after applying our algorithm to update the representation. (b) shows the attention masks generated by our model.
success in text question answering tasks. The neural reasoner updates the question by letting it interact with supporting facts through multiple reasoning layers. We note that applying this model to VQA is nontrivial
since the facts are in the form of an image. Thus image region information is extracted in our
model. To determine the relevance between question and each image region, we employ the attention
mechanism to generate the attention distribution over regions of the image. Our contributions are as
follows.
• We present a reasoning network to iteratively update the question representation after each time the question interacts with image content.
• Our model utilizes object proposals to obtain candidate image regions and has the ability to focus on image regions relevant to the question.
We evaluate and compare the performance of our model on two challenging VQA datasets, COCO-QA [19] and VQA [2]. Experiments demonstrate the ability of our model to infer image regions relevant to the question.
2 Related Work
Research on visual question answering is mostly driven by text question answering and image
captioning methods. In natural language processing, question answering is a well-studied problem. In
[22], an end-to-end memory network was used with a recurrent attention model over a large external
memory. Compared with the original memory network, it has less supervision and shows comparable
results on the QA task. The neural reasoning system proposed in [18], named neural reasoner, can
utilize multiple supporting facts and find an answer. Decent performance was achieved on positional
reasoning and path finding QA tasks.
VQA is closely related to image captioning [10, 24, 28, 5]. In [5], a set of likely words are detected
in several regions of the image and are combined together using a language model to generate image
description. In [10], a structured max-margin objective was used for deep neural networks. It learns to
embed both visual and language data into a common multi-modal space. Vinyals et al. [24] extracted
high-level image feature vectors from CNN and took them as the first input to the recurrent network
to generate caption. Xu et al. [28] integrated visual attention in the recurrent network. The proposed
algorithm predicts one word at a time by looking at local image regions relevant to the currently
generated word.
Malinowski et al. [15] first introduced a solution addressing the VQA problem. It combines natural
language processing with semantic segmentation in a Bayesian framework for automatic question
answering. Since it, several neural network based models [16, 19, 2] were proposed to solve the
VQA problem. These models use CNN to extract image features and recurrent neural networks to
embed questions. The embedded image and question features are then fused by concatenation [16]
2
Figure 2: The overall architecture of our model with a single reasoning layer for VQA (image understanding: CNN features over regions 1..M; question encoding: GRU over the question "What are they playing?"; reasoning: query updates with softmax attention; answering).
or element-wise addition [29] to predict answers. Recently several models integrated the attention
mechanism [29, 27, 3, 20] and showed the ability of their networks to focus on image regions related
to the question.
There also exist other approaches for VQA. For example, Xiong et al. [26] proposed an improved
dynamic memory network to fuse the question and image region representations using bi-directional
GRU. The algorithm of [1] learns to compose a network from a collection of composable modules.
Ma et al. [14] made use of CNN and proposed a model with three CNNs to capture information of
the image, question and multi-modal representation.
3 Our Model
The overall architecture of our model is illustrated in Figure 2. The model is derived from the neural
reasoner [18], which is able to update the representation of the question recursively by inferring over multiple supporting facts. Our model nevertheless contains a few inherently different components. Since
VQA involves only one question and one image each time instead of a set of facts, we use object
proposal to obtain candidate image regions serving as the facts in our model. Moreover, in the
pooling step, we employ an attention mechanism to determine the relevance between representation
of original questions and updated ones. Our network consists of four major components, i.e., image
understanding, question encoding, reasoning and answering layers.
3.1 Image Understanding Layer
The image understanding layer is designed for modeling image content into semantic vectors. We
build this layer upon the VGG model with 19 weight layers [21]. It is pre-trained on ImageNet [4].
The network has sixteen convolutional layers and five max-pooling layers of kernel size 2 ? 2 with
stride 2, followed by two fully-connected layers with 4,096 neurons.
Using a global representation of the image may fail to capture all the information necessary for answering questions involving multiple objects and spatial configurations. Moreover, since most of the questions are related to objects [19, 2], we utilize an object proposal generator to produce a set of candidate regions that are most likely to contain an object. For each image, we choose candidate regions
by extracting the top 19 detected edge boxes [31]. We choose intersection over union (IoU) value 0.3
when performing non-maximum suppression, which is a common setting in object detection.
Additionally, the whole image region is added to capture the global information in the image
understanding layer, resulting in 20 candidate regions per image. We extract features from each
candidate region through the above-mentioned CNN, yielding 4,096-dimensional image region features. The extracted features, however, lack spatial information for object location. To remedy this
issue, we follow the method of [8] to include an 8D representation

$$[x_{min}, y_{min}, x_{max}, y_{max}, x_{center}, y_{center}, w_{box}, h_{box}],$$

where $w_{box}$ and $h_{box}$ are the width and height of the image region. We set the image center as the origin. The coordinates are normalized to range from $-1$ to $1$. Then each image region is represented as a 4104D feature denoted as $f_i$ where $i \in [1, 20]$. For modeling convenience, we use a single layer perceptron to transform the image representation into a common latent space shared with the question feature

$$v_i = \phi(W_{vf} \cdot f_i + b_{vf}), \qquad (1)$$

where $\phi$ is the rectified activation function $\phi(x) = \max(0, x)$.
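A sketch of this feature construction (illustrative PyTorch code; the exact scaling of $w_{box}$ and $h_{box}$ is our assumption, while the 1,024-dimensional latent size is the one the paper uses for COCO-QA) is:

```python
import torch

def region_feature(cnn_feat, box, img_w, img_h):
    """Append the 8D spatial encoding to a 4096D CNN region feature.

    Coordinates are normalized to [-1, 1] with the image center as origin;
    how w_box and h_box are scaled is an assumption on our part.
    """
    xmin, ymin, xmax, ymax = box
    nx = lambda v: 2.0 * v / img_w - 1.0
    ny = lambda v: 2.0 * v / img_h - 1.0
    spatial = torch.tensor([nx(xmin), ny(ymin), nx(xmax), ny(ymax),
                            nx((xmin + xmax) / 2.0), ny((ymin + ymax) / 2.0),
                            2.0 * (xmax - xmin) / img_w, 2.0 * (ymax - ymin) / img_h])
    return torch.cat([cnn_feat, spatial])              # 4104D feature f_i

# v_i = phi(W_vf . f_i + b_vf) with phi = ReLU (Eq. 1).
project = torch.nn.Sequential(torch.nn.Linear(4104, 1024), torch.nn.ReLU())
f_i = region_feature(torch.randn(4096), (10.0, 20.0, 120.0, 200.0), 224, 224)
v_i = project(f_i)
```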
3.2 Question Encoding Layer
To encode the natural language question, we resort to the recurrent neural network, which has demonstrated great success on sentence embedding. The question encoding layer is composed of a word embedding layer and GRU cells. Given a question $w = [w_1, \ldots, w_T]$, where $w_t$ is the $t$-th word in the question and $T$ is the length of the question, we first embed each word $w_t$ into a vector space $x_t$ with an embedding matrix, $x_t = W_e w_t$. Then for each time step, we feed $x_t$ into the GRU sequentially. At each step, the GRU takes one input vector $x_t$, and updates and outputs a hidden state $h_t$. The final hidden state $h_T$ is considered as the question representation. We also embed it into the common latent space shared with the image embedding through a single layer perceptron

$$q = \phi(W_{qh} \cdot h_T + b_{qh}). \qquad (2)$$

We utilize the pre-trained network with the skip-thought vectors model [11], designed for general sentence embedding, to initialize our question encoding layer as used in [17]. Note that the skip-thought vectors model is trained in an unsupervised manner on a large language corpus. By fine-tuning the GRU, we transfer knowledge from the natural language corpus to the VQA problem.
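A minimal PyTorch sketch of this encoder (dimensions other than the 1,024 latent size are illustrative; the paper initializes the embedding and GRU from the pre-trained skip-thought model rather than from scratch as here):

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Embed word indices, run a GRU, project h_T into the latent space (Eq. 2)."""
    def __init__(self, vocab_size, embed_dim=620, hidden_dim=2400, latent_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # x_t = W_e w_t
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Sequential(nn.Linear(hidden_dim, latent_dim), nn.ReLU())

    def forward(self, word_ids):            # word_ids: (batch, T) integer tensor
        x = self.embed(word_ids)            # (batch, T, embed_dim)
        _, h_T = self.gru(x)                # final hidden state: (1, batch, hidden)
        return self.proj(h_T.squeeze(0))    # q = phi(W_qh h_T + b_qh)

q = QuestionEncoder(vocab_size=10000)(torch.randint(0, 10000, (2, 7)))
```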
3.3 Reasoning Layer
The reasoning layer includes question-image interaction and weighted pooling.
Question-Image Interaction. Given that a multilayer perceptron (MLP) has the ability to determine the relationship between two input sentences according to supervision [7, 18], we examine image region features and the question representation to acquire a good understanding of the question. In a memory network [22], these image region features are akin to the input memory representation, which can be retrieved multiple times according to the question.

There are a total of $L$ reasoning layers. In the $l$-th reasoning layer, the $i$-th interaction happens between $q^{l-1}$ and $v_i$ through an MLP, resulting in the updated question representation $q_i^l$ as

$$q_i^l = MLP_l(q^{l-1}, v_i; \theta_l), \qquad (3)$$

with $\theta_l$ being the model parameter of the interaction at the $l$-th reasoning layer. In the simplest case with a single layer in $MLP_l$, the updating process is given by

$$q_i^l = \phi(W_l \cdot (q^{l-1} \odot v_i) + b_l), \qquad (4)$$

where $\odot$ indicates element-wise multiplication, which performs better in our experiments than other strategies, e.g., concatenation and element-wise addition.

Generally speaking, $q_i^l$ contains the update of the network's focus towards answering the question after its interaction with image feature $v_i$. This property is important for the reasoning process [18].
Weighted Pooling. Pooling aims to fuse components of the question after its interaction with all image features to update the representation. Two common strategies for pooling are max and mean pooling. However, when answering a specific question, it is often the case that the correct answer is only related to particular image regions. Therefore, using max pooling may lead to unsatisfying results since questions may involve interaction between humans and objects, while mean pooling may also cause inferior performance due to noise introduced by regions irrelevant to the question.
To determine the relevance between the question and each image region, we resort to the attention mechanism used in [28] to generate an attention distribution over image regions. For each updated question q_i^l obtained after interaction with the ith image region, we measure how close it is to the original question representation q^{l-1}. Hence, the attention weights take the following form:
C_i = tanh(W_A · q_i^l ⊙ (W_B · q^{l-1} + b_B)),
P = softmax(W_P · C + b_P),    (5)
where C is a matrix whose ith column is C_i, and P ∈ R^M is an M-dimensional vector representing the attention weights. M is the number of image regions, set to 20. Based on the attention distribution,
we calculate the weighted average of q_i^l, resulting in the updated question representation q^l as
q^l = Σ_i P_i q_i^l.    (6)
The updated question representation q^l after weighted pooling serves as the question input to the next reasoning or answering layer.
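As a hedged sketch of (5)-(6), the following function scores each updated question against q^{l-1} and averages with the resulting attention weights; the matrix/vector shapes are assumptions chosen for clarity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_pool(Q, q_prev, W_A, W_B, b_B, w_P, b_P):
    """Eqs. (5)-(6). Q is (d, M) with one column q_i^l per image region
    (M = 20 in the paper); returns the pooled representation q^l."""
    C = np.tanh((W_A @ Q) * (W_B @ q_prev + b_B)[:, None])  # column i is C_i
    P = softmax(w_P @ C + b_P)                              # (M,) attention weights
    return Q @ P                                            # q^l = sum_i P_i q_i^l
```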
3.4 Answering Layer
Following [19, 2], we model VQA as a classification problem with pre-defined classes. Given the updated question representation at the last reasoning layer, q^L, a softmax layer is employed to classify q^L into one of the possible answers as
p_ans = softmax(W_ans · q^L + b_ans).    (7)
Note that instead of the softmax layer for predicting the correct answer, it is also possible to utilize an LSTM or GRU decoder, taking q^L as input, to generate free-form answers.
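A minimal sketch of the classification head in (7), with the top-1000 answer vocabulary assumed as in the experiments:

```python
import numpy as np

def answer_distribution(q_L, W_ans, b_ans):
    """Eq. (7): softmax over the candidate answer classes."""
    z = W_ans @ q_L + b_ans
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(q_L, W_ans, b_ans, answers):
    """Pick the most probable of the pre-defined answers."""
    return answers[int(np.argmax(answer_distribution(q_L, W_ans, b_ans)))]
```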
4 Experiments
4.1 Datasets and Evaluation Metrics
We conduct experiments on COCO-QA [19] and VQA [2]. The COCO-QA dataset is based on
Microsoft COCO image data [13]. There are 78,736 training questions and 38,948 test ones, based
on a total of 123,287 images. Four types of questions are provided, including Object, Number, Color
and Location. Each type takes 70%, 7%, 17% and 6% of the whole dataset respectively.
In the VQA dataset, each image from the COCO data is annotated by Amazon Mechanical Turk
(AMT) with three questions. It is the largest VQA benchmark so far. There are 248,349, 121,512 and 244,302 questions for training, validation, and testing, respectively. For each question, ten answers are provided to capture the consensus of the annotators. Following [2], we choose the top 1,000 most frequent answers as candidate outputs, which constitute 82.67% of the train+val answers.
Since we formulate VQA as a classification problem, mean classification accuracy is used to evaluate the model on the COCO-QA dataset. In addition, the Wu-Palmer similarity (WUPS) measure [25] is reported on COCO-QA. WUPS calculates the similarity between two words based on their longest common subsequence in the taxonomy tree. Following [19], we use thresholds 0.9 and 0.0 in our evaluation. The VQA dataset provides a different kind of evaluation metric. Since ten ground-truth answers are given, a predicted answer is considered correct when three or more ground-truth answers match it. Otherwise, a partial score is given.
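The VQA consensus metric just described can be written in a few lines; this is a sketch of the scoring rule from [2] (full credit at three or more annotator matches, partial credit #matches/3 otherwise), not the official evaluation script.

```python
def vqa_accuracy(predicted, human_answers):
    """Consensus metric of Antol et al. [2] over the ten provided answers."""
    matches = sum(a == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators agree, so the answer earns partial score 2/3.
print(vqa_accuracy("dog", ["dog", "dog", "puppy", "cat"] + ["cat"] * 6))
```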
4.2 Implementation Details
We implement our network using the public Torch computing framework. Before training, all question sentences are normalized to lower case and question marks are removed. The resulting words are fed into the GRU one by one. A whole answer with one or more words is regarded as a separate class. For extracting image features, each candidate region is cropped and resized to 224 × 224 before being fed into the CNN.
For the COCO-QA dataset, we set the dimension of the common latent space to 1,024. Since the VQA dataset is larger than COCO-QA, we double the dimension of the common latent space to adapt to the data and classes. In each reasoning layer, we use a single layer in the MLP. We test up to two reasoning layers; no further improvement is observed when using three or more layers.
Methods        ACC.   Object  Number  Color  Location
Mean Pooling   58.15  60.61   45.34   55.37  52.74
Max Pooling    59.37  62.11   45.70   55.91  53.63
W/O Global     60.87  63.32   46.68   58.66  55.49
W/O Coord      61.33  63.76   46.24   59.35  56.66
Full Model     61.99  64.53   46.68   59.81  56.82

Table 1: Comparison of ablation models. Models are trained and tested on COCO-QA [19] with one reasoning layer.
Methods          ACC.   Object  Number  Color  Location  WUPS 0.9  WUPS 0.0
IMG+BOW [19]     55.92  58.66   44.10   51.96  49.39     66.78     88.99
2VIS+BLSTM [19]  55.09  58.17   44.79   49.53  47.34     65.34     88.64
Ensemble [19]    57.84  61.08   47.66   51.48  50.28     67.90     89.52
ABC-CNN [3]      58.10  62.46   45.70   46.81  53.67     68.44     89.85
DPPnet [17]      61.19  -       -       -      -         70.84     90.61
SAN [29]         61.60  64.50   48.60   57.90  54.00     71.60     90.90
QRU (1)          61.99  64.53   46.68   59.81  56.82     71.83     91.11
QRU (2)          62.50  65.06   46.90   60.50  56.99     72.58     91.62

Table 2: Evaluation results on the COCO-QA dataset [19]. "QRU (1)" and "QRU (2)" refer to 1 and 2 reasoning layers incorporated in the system.
The network is trained in an end-to-end fashion using stochastic gradient descent with mini-batches of 100 samples and momentum 0.9. The learning rate starts from 10^-3 and decreases by a factor of 10 when validation accuracy stops improving. We use dropout and gradient clipping to regularize the training process. Our model is denoted as QRU in the following experiments.
4.3 Ablation Results
We conduct experiments to examine the usefulness of each component in our model. Specifically, we compare different question representation pooling mechanisms, i.e., mean pooling and max pooling. We also train two controlled models devoid of the global image feature and of the spatial coordinates, denoted as W/O Global and W/O Coord. Table 1 shows the results.
The performance of the mean and max pooling models is substantially worse than that of the full model, which uses weighted pooling. This indicates that our model benefits from the attention mechanism by looking at several image regions rather than only one or all of them. A drop of 1.12% in accuracy is observed if the global image feature is not modeled, confirming that inclusion of the whole image is important for capturing global information. Omitting the spatial coordinates also leads to a drop in accuracy; notably, the greatest deterioration is on the question type Object. This is because the Object type seeks information around the object, as in "What is next to the stop sign?". Spatial coordinates help our model reason about spatial relationships among objects.
4.4 Comparison with State-of-the-art
We compare performance in Tables 2 and 3 with experimental results on COCO-QA and VQA, respectively. Table 2 shows that our model with only one reasoning layer already outperforms the state-of-the-art 2-layer stacked attention network (SAN) [29]. Two reasoning layers give the best performance. We also report the per-category accuracy in Table 2 to show the strengths and weaknesses of our model. Our best model outperforms SAN by 2.6% and 2.99% in the question types Color and Location respectively, and by 0.56% in Object.
Our analysis is that the SAN model puts its attention on coarser regions obtained from the activation of the last convolutional layer, which may include cluttered and noisy background. In contrast, our model only deals with selected object proposal regions, which have a good chance of being objects. When answering questions involving objects, our model gives reasonable results. For the question type Number, since an object proposal may contain several objects, our counting ability is weakened. In fact, the counting task is a complete computer vision problem in its own right.
                Open-Ended (test-dev)       test-std   Multiple-Choice (test-dev)   test-std
Methods         All    Y/N    Num    Other  All        All    Y/N    Num    Other   All
BOWIMG [2]      52.64  75.77  33.67  37.37  -          58.97  75.59  34.35  50.33   -
LSTMIMG [2]     53.74  78.94  35.24  36.42  54.06      57.17  78.95  35.80  43.41   57.57
iBOWIMG [30]    55.72  76.55  35.03  42.62  55.89      61.68  76.68  37.05  54.44   61.97
DPPnet [17]     57.22  80.71  37.24  41.71  57.36      62.48  80.79  38.94  52.16   62.69
SAN [29]        58.70  79.30  36.60  46.10  58.90      -      -      -      -       -
WR Sel [20]     -      -      -      -      -          62.44  77.62  34.28  55.84   62.43
FDA [9]         59.24  81.14  36.16  45.77  59.54      64.01  81.50  39.00  54.72   64.18
DMN+ [26]       60.37  80.75  37.00  48.25  60.36      -      -      -      -       -
QRU (1)         59.26  80.98  35.93  45.99  59.44      63.96  81.00  37.08  55.48   64.13
QRU (2)         60.72  82.29  37.02  47.67  60.76      65.43  82.24  38.69  57.12   65.43

Table 3: Evaluation results on the VQA dataset [2]. "QRU (1)" and "QRU (2)" refer to 1 and 2 reasoning layers incorporated in the system.
Figure 3: Retrieved questions before and after update from the COCO-QA dataset [19].
Original: "What next to two other open laptops?"
Before updating: "What next to each other dipicting smartphones?" / "What next to two boys?" / "What hooked up to two computers?" / "What next to each other with visible piping?" / "What next to two pair of shoes?"
After updating with one reasoning layer: "What are there laying down with two remotes?" / "What next to each other depicting smartphones?" / "What hooked up to two computers?" / "What next to each other with monitors?" / "What cubicle with four differnet types of computers?"
After updating with two reasoning layers: "What plugged with wires?" / "What next to each other with monitors?" / "What are open at the table with cell phones?" / "What is next to the monitor?" / "What sits on the desk along with 2 monitors?"
Table 3 shows that our model yields a prominent improvement on the Other type when compared with other models [2, 30, 17] that use a global representation of the image. Object proposals in our model are useful since the Other type contains questions such as "What color ...", "What kind ...", "Where is ...", etc. Our model outperforms that of [20] by 3%, where the latter also exploits object proposals. Compared with [20], we use a smaller number of object proposals, demonstrating the effectiveness of our approach. Table 3 also reveals that our model with two reasoning layers achieves state-of-the-art results for both the open-ended and multiple-choice tasks.
4.5 Qualitative Analysis
To understand the ability of our model to update the question representation, we show an image and several questions in Figure 3. The questions retrieved from the test set are based on the cosine similarities to the original question before and after our model updates the representation. It is notable that before the update, 4 out of the top 5 similar questions begin with "What next". This is because the GRU acts as the language model, making the retrieved questions share a similar language structure. After we update the question representation, the resulting questions are more related to the image content regarding the objects computers and monitors, while the originally retrieved questions contain irrelevant words like boys and shoes. The retrieved questions become even more informative when using two reasoning layers.
We visualize a few attention masks generated by our model in Figure 4. The visualization is created by soft-masking the image with a mask obtained by summing the weights of each region. The mask is normalized to a maximum value of 1, followed by a small Gaussian blur. Our model is capable of putting attention on important regions closely relevant to the question. To answer the question "What is the color of the snowboard?", the proposed model finds the snowboard. For the other question "The man holding what on top of a snow covered hill?", it is required to infer the relation among the person, the snow covered hill, and the snowboard. With these attention masks, it is possible to predict correct answers since irrelevant image regions are ruled out. More examples are shown in Figure 5.
Figure 4: Visualization of attention masks across panels (a)-(c). Our model learns to attend to particular image regions that are relevant to the question. The two examples are: Q: What is the color of the snowboard? A: Yellow. Q: The man holding what on top of a snow covered hill? A: Snowboard.
Figure 5: Visualization of more attention masks. The examples shown are: Q: What is the color of the sunflower? A: Yellow. Q: What is sitting on top of table in a workshop? A: Boat. Q: What is the man in stadium style seats using? A: Phone. Q: What are hogging a bed by themselfs? A: Dogs. Q: What next to a large building? A: Clock.
5 Conclusion
We have proposed an end-to-end trainable neural network for VQA. Our model learns to answer questions by updating the question representation and inferring over a set of image regions with a multilayer perceptron. Visualization of attention masks demonstrates the ability of our model to focus on image regions highly related to the questions. Experimental results are satisfying on the two challenging VQA datasets. Future work includes improving object counting ability and word-region relation modeling.
Acknowledgements
This work is supported by a grant from the Research Grants Council of the Hong Kong SAR (project
No. 2150760) and by the National Science Foundation China, under Grant 61133009. We thank
NVIDIA for providing Ruiyu Li a Tesla K40 GPU accelerator for this work.
References
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705, 2016.
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA: Visual question answering. In ICCV, pages 2425-2433, 2015.
[3] K. Chen, J. Wang, L.-C. Chen, H. Gao, W. Xu, and R. Nevatia. ABC-CNN: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960, 2015.
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255, 2009.
[5] H. Fang, S. Gupta, F. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, et al. From captions to visual concepts and back. In CVPR, pages 1473-1482, 2015.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580-587, 2014.
[7] B. Hu, Z. Lu, H. Li, and Q. Chen. Convolutional neural network architectures for matching natural language sentences. In NIPS, pages 2042-2050, 2014.
[8] R. Hu, H. Xu, M. Rohrbach, J. Feng, K. Saenko, and T. Darrell. Natural language object retrieval. arXiv preprint arXiv:1511.04164, 2015.
[9] I. Ilija, Y. Shuicheng, and F. Jiashi. A focused dynamic attention model for visual question answering. arXiv preprint arXiv:1604.01485, 2016.
[10] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, pages 3128-3137, 2015.
[11] R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. Skip-thought vectors. In NIPS, pages 3276-3284, 2015.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097-1105, 2012.
[13] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740-755, 2014.
[14] L. Ma, Z. Lu, and H. Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015.
[15] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In NIPS, pages 1682-1690, 2014.
[16] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, pages 1-9, 2015.
[17] H. Noh, P. H. Seo, and B. Han. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756, 2015.
[18] B. Peng, Z. Lu, H. Li, and K.-F. Wong. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.
[19] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. In NIPS, pages 2935-2943, 2015.
[20] K. J. Shih, S. Singh, and D. Hoiem. Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394, 2015.
[21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[22] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. Weakly supervised memory networks. arXiv preprint arXiv:1503.08895, 2015.
[23] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215, 2014.
[24] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, pages 3156-3164, 2015.
[25] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In ACL, pages 133-138, 1994.
[26] C. Xiong, S. Merity, and R. Socher. Dynamic memory networks for visual and textual question answering. arXiv preprint arXiv:1603.01417, 2016.
[27] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234, 2015.
[28] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.
[29] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015.
[30] B. Zhou, Y. Tian, S. Sukhbaatar, A. Szlam, and R. Fergus. Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167, 2015.
[31] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, pages 391-405, 2014.
5,816 | 6,262 | Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
Aryan Mokhtari*
University of Pennsylvania
[email protected]
Hadi Daneshmand*
ETH Zurich, Switzerland
[email protected]
Thomas Hofmann
ETH Zurich, Switzerland
[email protected]
Aurelien Lucchi
ETH Zurich, Switzerland
[email protected]
Alejandro Ribeiro
University of Pennsylvania
[email protected]
Abstract
We consider empirical risk minimization for large-scale datasets. We introduce
Ada Newton as an adaptive algorithm that uses Newton?s method with adaptive
sample sizes. The main idea of Ada Newton is to increase the size of the training set by a factor larger than one in a way that the minimization variable for
the current training set is in the local neighborhood of the optimal argument of
the next training set. This allows us to exploit the quadratic convergence property
of Newton?s method and reach the statistical accuracy of each training set with
only one iteration of Newton?s method. We show theoretically that we can iteratively increase the sample size while applying single Newton iterations without
line search and staying within the statistical accuracy of the regularized empirical
risk. In particular, we can double the size of the training set in each iteration when
the number of samples is sufficiently large. Numerical experiments on various
datasets confirm the possibility of increasing the sample size by factor 2 at each
iteration which implies that Ada Newton achieves the statistical accuracy of the
full training set with about two passes over the dataset.1
1 Introduction
A hallmark of empirical risk minimization (ERM) on large datasets is that evaluating descent directions requires a complete pass over the dataset. Since this is undesirable due to the large number of
training samples, stochastic optimization algorithms with descent directions estimated from a subset
of samples are the method of choice. First order stochastic optimization has a long history [19, 17]
but the last decade has seen fundamental progress in developing alternatives with faster convergence.
A partial list of this consequential literature includes Nesterov acceleration [16, 2], stochastic averaging gradient [20, 6], variance reduction [10, 26], and dual coordinate methods [23, 24].
When it comes to stochastic second order methods the first challenge is that while evaluation of
Hessians is as costly as evaluation of gradients, the stochastic estimation of Hessians has proven
more challenging. This difficulty is addressed by incremental computations in [9] and subsampling
in [7] or circumvented altogether in stochastic quasi-Newton methods [21, 12, 13, 11, 14]. Despite
this incipient progress it is nonetheless fair to say that the striking success in developing stochastic
first order methods is not matched by equal success in the development of stochastic second order
methods. This is because even if the problem of estimating a Hessian is solved there are still four
challenges left in the implementation of Newton-like methods in ERM:
* The first two authors have contributed equally in this work.
(i) Global convergence of Newton's method requires implementation of a line search subroutine, and line searches in ERM require a complete pass over the dataset.
(ii) The quadratic convergence advantage of Newton's method manifests close to the optimal solution, but there is no point in solving ERM problems beyond their statistical accuracy.
(iii) Newton's method works for strongly convex functions, but loss functions are not strongly convex for many ERM problems of practical importance.
(iv) Newton's method requires inversion of Hessians, which is costly in large dimensional ERM.
Because stochastic Newton-like methods can't use line searches [cf. (i)], must work on problems that may not be strongly convex [cf. (iii)], and never operate very close to the optimal solution [cf. (ii)], they never experience quadratic convergence. They do improve convergence constants and, if efforts are taken to mitigate the cost of inverting Hessians [cf. (iv)] as in [21, 12, 7, 18], they result in faster convergence. But since they still converge at linear rates they do not enjoy the foremost benefits of Newton's method.
In this paper we attempt to circumvent (i)-(iv) with the Ada Newton algorithm that combines the use of Newton iterations with adaptive sample sizes [5]. Say the total number of available samples is N, consider subsets of n ≤ N samples, and suppose the statistical accuracy of the ERM associated with n samples is V_n (Section 2). In Ada Newton we add a quadratic regularization term of order V_n to the empirical risk (so that the regularized risk also has statistical accuracy V_n) and assume that for a certain initial sample size m_0, the problem has been solved to its statistical accuracy V_{m_0}. The sample size is then increased by a factor α > 1 to n = αm_0. We proceed to perform a single Newton iteration with unit stepsize and prove that the result of this update solves this extended ERM problem to its statistical accuracy (Section 3). This permits a second increase of the sample size by a factor α and a second Newton iteration that is likewise guaranteed to solve the problem to its statistical accuracy. Overall, this permits minimizing the empirical risk in α/(α-1) passes over the dataset and inverting log_α N Hessians. Our theoretical results provide a characterization of the values of α that are admissible with respect to different problem parameters (Theorem 1). In particular, we show that asymptotically in the number of samples n and with proper parameter selection we can set α = 2 (Proposition 2). In such a case we can optimize to within statistical accuracy in about 2 passes over the dataset and after inversion of about 3.32 log_10 N Hessians. Our numerical experiments verify that α = 2 is a valid factor for increasing the size of the training set at each iteration while performing a single Newton iteration for each value of the sample size.
2 Empirical risk minimization
We aim to solve ERM problems to their statistical accuracy. To state this problem formally, consider an argument w ∈ R^p, a random variable Z with realizations z, and a convex loss function f(w; z). We want to find an argument w* that minimizes the statistical average loss L(w) := E_Z[f(w, Z)],
w* := argmin_w L(w) = argmin_w E_Z[f(w, Z)].    (1)
The loss in (1) can't be evaluated because the distribution of Z is unknown. We have, however, access to a training set T = {z_1, ..., z_N} containing N independent samples z_1, ..., z_N that we can use to estimate L(w). We therefore consider a subset S_n ⊆ T and settle for minimization of the empirical risk L_n(w) := (1/n) Σ_{k=1}^n f(w, z_k),
w_n* := argmin_w L_n(w) = argmin_w (1/n) Σ_{k=1}^n f(w, z_k),    (2)
where, without loss of generality, we have assumed S_n = {z_1, ..., z_n} contains the first n elements of T. The difference between the empirical risk in (2) and the statistical loss in (1) is a fundamental quantity in statistical learning. We assume here that there exists a constant V_n, which depends on the number of samples n, that upper bounds their difference for all w with high probability (w.h.p.),
sup_w |L(w) - L_n(w)| ≤ V_n,    w.h.p.    (3)
That the statement in (3) holds w.h.p. means that there exists a constant δ such that the inequality holds with probability at least 1 - δ. The constant V_n depends on δ but we keep that dependency implicit to simplify notation. For subsequent discussions, observe that bounds V_n of order V_n = O(1/√n) date back to the seminal work of Vapnik; see e.g., [25, Section 3.4]. Bounds of order V_n = O(1/n) have been derived more recently under stronger regularity conditions that are not uncommon in practice, [1, 8, 3].
An important consequence of (3) is that there is no point in solving (2) to an accuracy higher than V_n. Indeed, if we find a variable w_n for which L_n(w_n) - L_n(w_n*) ≤ V_n, finding a better approximation of w_n* is moot because (3) implies that this is not necessarily a better approximation of the minimizer w* of the statistical loss. We say the variable w_n solves the ERM problem in (2) to within its statistical accuracy. In particular, this implies that adding a regularization of order V_n to (2) yields a problem that is essentially equivalent. We can then consider a quadratic regularizer of the form (cV_n/2)||w||^2 to define the regularized empirical risk R_n(w) := L_n(w) + (cV_n/2)||w||^2 and the corresponding optimal argument
w_n* := argmin_w R_n(w) = argmin_w L_n(w) + (cV_n/2)||w||^2.    (4)
Since the regularization in (4) is of order V_n and (3) holds, the difference between R_n(w_n*) and L(w*) is also of order V_n (this may be not as immediate as it seems; see [22]). Thus, we can say that a variable w_n satisfying R_n(w_n) - R_n(w_n*) ≤ V_n solves the ERM problem to within its statistical accuracy. We accomplish this goal in this paper with the Ada Newton algorithm which we introduce in the following section.
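As a small illustrative sketch, the regularized risk of (4) and its gradient can be set up as follows; the per-sample loss oracles and the choice V_n = 1/n (one of the two regimes discussed above) are assumptions of this example, not requirements of the theory.

```python
import numpy as np

def make_regularized_risk(loss, grad_loss, Z, c):
    """Build R_n(w) = L_n(w) + (c V_n / 2) ||w||^2 of Eq. (4) with V_n = 1/n.
    loss(w, z) and grad_loss(w, z) evaluate f and its gradient per sample."""
    def R(w, n):
        V_n = 1.0 / n
        return np.mean([loss(w, z) for z in Z[:n]]) + 0.5 * c * V_n * w @ w
    def grad_R(w, n):
        V_n = 1.0 / n
        g = np.mean([grad_loss(w, z) for z in Z[:n]], axis=0)
        return g + c * V_n * w
    return R, grad_R
```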
3 Ada Newton
To solve (4), suppose the problem has been solved to within its statistical accuracy for a set S_m ⊂ S_n with m = n/α samples, where α > 1. Therefore, we have found a variable w_m for which R_m(w_m) - R_m(w_m*) ≤ V_m. Our goal is to update w_m using the Newton step in a way that the updated variable w_n estimates w_n* with accuracy V_n. To do so, compute the gradient of the risk R_n evaluated at w_m,
∇R_n(w_m) = (1/n) Σ_{k=1}^n ∇f(w_m, z_k) + c V_n w_m,    (5)
as well as the Hessian H_n of R_n evaluated at w_m,
H_n := ∇²R_n(w_m) = (1/n) Σ_{k=1}^n ∇²f(w_m, z_k) + c V_n I,    (6)
and update w_m with the Newton step of the regularized risk R_n to compute
w_n = w_m - H_n^{-1} ∇R_n(w_m).    (7)
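A minimal sketch of the update (5)-(7), assuming per-sample gradient and Hessian oracles and the V_n = 1/n regime; a dense linear solve stands in for the Hessian inversion.

```python
import numpy as np

def newton_step(w_m, grad_f, hess_f, samples, c):
    """One unit-stepsize Newton update on R_n at w_m, Eqs. (5)-(7).
    grad_f(w, z) and hess_f(w, z) return the per-sample gradient/Hessian."""
    n = len(samples)
    V_n = 1.0 / n
    g = np.mean([grad_f(w_m, z) for z in samples], axis=0) + c * V_n * w_m
    H = (np.mean([hess_f(w_m, z) for z in samples], axis=0)
         + c * V_n * np.eye(w_m.size))
    return w_m - np.linalg.solve(H, g)   # w_n = w_m - H_n^{-1} grad R_n(w_m)
```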
Note that the stepsize of the Newton update in (7) is 1, which avoids line search algorithms requiring
extra computation. The main contribution of this paper is to derive a condition that guarantees that
wn solves Rn to within its statistical accuracy Vn . To do so, we first assume the following conditions
are satisfied.
Assumption 1. The loss functions f(w, z) are convex with respect to w for all values of z. Moreover, their gradients ∇f(w, z) are Lipschitz continuous with constant M,
||∇f(w, z) - ∇f(w', z)|| ≤ M ||w - w'||,    for all z.    (8)
Assumption 2. The loss functions f (w, z) are self-concordant with respect to w for all z.
Assumption 3. The difference between the gradients of the empirical loss L_n and the statistical average loss L is bounded by V_n^{1/2} for all w with high probability,
sup_w ||∇L(w) - ∇L_n(w)|| ≤ V_n^{1/2},    w.h.p.    (9)
Algorithm 1 Ada Newton
1: Parameters: Sample size increase constants α_0 > 1 and 0 < β < 1.
2: Input: Initial sample size n = m_0 and argument w_n = w_{m_0} with ||∇R_n(w_n)|| < √(2c) V_n
3: while n ≤ N do {main loop}
4:    Update argument and index: w_m = w_n and m = n. Reset factor α = α_0.
5:    repeat {sample size backtracking loop}
6:       Increase sample size: n = min{αm, N}.
7:       Compute gradient [cf. (5)]: ∇R_n(w_m) = (1/n) Σ_{k=1}^n ∇f(w_m, z_k) + c V_n w_m
8:       Compute Hessian [cf. (6)]: H_n = (1/n) Σ_{k=1}^n ∇²f(w_m, z_k) + c V_n I
9:       Newton update [cf. (7)]: w_n = w_m - H_n^{-1} ∇R_n(w_m)
10:      Compute gradient [cf. (5)]: ∇R_n(w_n) = (1/n) Σ_{k=1}^n ∇f(w_n, z_k) + c V_n w_n
11:      Backtrack sample size increase: α = βα.
12:   until ||∇R_n(w_n)|| < √(2c) V_n
13: end while

The conditions in Assumption 1 imply that the average loss L(w) and the empirical loss L_n(w) are convex and their gradients are Lipschitz continuous with constant M. Thus, the empirical risk
R_n(w) is strongly convex with constant cV_n and its gradients ∇R_n(w) are Lipschitz continuous with parameter M + cV_n. Likewise, the condition in Assumption 2 implies that the average loss L(w), the empirical loss L_n(w), and the empirical risk R_n(w) are also self-concordant. The condition in Assumption 3 says that the gradients of the empirical risk converge to their statistical average at a rate of order V_n^{1/2}. If the constant V_n in condition (3) is of order not faster than O(1/n), the condition in Assumption 3 holds if the gradients converge to their statistical average at a rate of order V_n^{1/2} = O(1/√n). This is a conservative rate for the law of large numbers.
In the following theorem, given Assumptions 1-3, we state a condition that guarantees that the variable w_n evaluated as in (7) solves R_n to within its statistical accuracy V_n.

Theorem 1. Consider the variable w_m as a V_m-optimal solution of the risk R_m, i.e., a solution such that R_m(w_m) - R_m(w_m*) ≤ V_m. Let n = αm > m, consider the risk R_n associated with the sample set S_n ⊇ S_m, and suppose Assumptions 1-3 hold. If the sample size n is chosen such that

( 2(M + cV_m)V_m / (cV_n) )^{1/2} + 2(n-m) / (n c^{1/2}) + ( (2 + √2) c^{1/2} + c ||w*|| ) (V_m - V_n) / (cV_n)^{1/2} ≤ 1/4    (10)

and

144 ( V_m + (2(n-m)/n)(V_{n-m} + V_m) + 2(V_m - V_n) + (c(V_m - V_n)/2) ||w*||² )² ≤ V_n    (11)

are satisfied, then the variable w_n, which is the outcome of applying one Newton step on the variable w_m as in (7), has sub-optimality error V_n with high probability, i.e.,

R_n(w_n) - R_n(w_n*) ≤ V_n,    w.h.p.    (12)
Proof. See Section 4.
Theorem 1 states conditions under which we can iteratively increase the sample size while applying
single Newton iterations without line search and staying within the statistical accuracy of the regularized empirical risk. The constants in (10) and (11) are not easy to parse but we can understand
them qualitatively if we focus on large m. This results in a simpler condition that we state next.
Proposition 2. Consider a learning problem in which the statistical accuracy satisfies V_m ≤ αV_n for n = αm and lim_{n→∞} V_n = 0. If the regularization constant c is chosen so that

( 2αM / c )^{1/2} + 2(α - 1) / (α c^{1/2}) < 1/4,    (13)

then there exists a sample size m̄ such that (10) and (11) are satisfied for all m > m̄ and n = αm. In particular, if α = 2 we can satisfy (10) and (11) with c > 16(2√M + 1)².
Proof. That the condition in (11) is satisfied for all m > m̄ follows simply because the left-hand side is of order V_m² and the right-hand side is of order V_n. To show that the condition in (10) is satisfied for sufficiently large m, observe that the third summand in (10) is of order O((V_m - V_n)/V_n^{1/2}) and vanishes for large m. In the second summand of (10) we make n = αm to obtain the second summand in (13), and in the first summand we replace the ratio V_m/V_n by its bound α to obtain the first summand of (13). To conclude the proof just observe that the inequality in (13) is strict.

The condition V_m ≤ αV_n is satisfied if V_n = 1/n and is also satisfied if V_n = 1/√n because √α < α. This means that for most ERM problems we can progress geometrically over the sample size and arrive at a solution w_N that solves the ERM problem R_N to its statistical accuracy V_N as long as (13) is satisfied.
The result in Theorem 1 motivates the definition of the Ada Newton algorithm that we summarize in Algorithm 1. The core of the algorithm is in steps 6-9. Step 6 implements an increase in the sample size by a factor α and steps 7-9 implement the Newton iteration in (5)-(7). The required input to the algorithm is an initial sample size m_0 and a variable w_{m_0} that is known to solve the ERM problem with accuracy V_{m_0}. Observe that this initial iterate doesn't have to be computed with Newton iterations. The initial problem to be solved contains a moderate number of samples m_0, a mild condition number because it is regularized with constant cV_{m_0}, and is to be solved to a moderate accuracy V_{m_0} (recall that V_{m_0} is of order V_{m_0} = O(1/m_0) or of order V_{m_0} = O(1/√m_0) depending on regularity assumptions). Stochastic first order methods excel at solving problems with a moderate number of samples m_0 and moderate condition number to moderate accuracy.
We remark that the conditions in Theorem 1 and Proposition 2 are conceptual but that the constants involved are unknown in practice. In particular, this means that the allowed values of the factor α that controls the growth of the sample size are unknown a priori. We solve this problem in Algorithm 1 by backtracking the increase in the sample size until we guarantee that w_n minimizes the empirical risk R_n(w_n) to within its statistical accuracy. This backtracking of the sample size is implemented in Step 11 and the optimality condition of w_n is checked in Step 12. The condition in Step 12 is on the gradient norm that, because R_n is strongly convex, can be used to bound the suboptimality R_n(w_n) - R_n(w_n*) as

R_n(w_n) - R_n(w_n*) ≤ (1/(2cV_n)) ||∇R_n(w_n)||².    (14)

Observe that checking this condition requires an extra gradient computation undertaken in Step 10. That computation can be reused in the computation of the gradient in Step 5 once we exit the backtracking loop. We emphasize that when the condition in (13) is satisfied, there exists a sufficiently large m for which the conditions in Theorem 1 are satisfied for n = αm. This means that the backtracking condition in Step 12 is satisfied after one iteration and that, eventually, Ada Newton progresses by increasing the sample size by a factor α. This means that Algorithm 1 can be thought of as having a damped phase where the sample size increases by a factor smaller than α and a geometric phase where the sample size grows by a factor α in all subsequent iterations. The computational cost of this geometric phase is of not more than α/(α-1) passes over the dataset and requires inverting not more than log_α N Hessians. If c > 16(2√M + 1)², we make α = 2 for optimizing to within statistical accuracy in about 2 passes over the dataset and after inversion of about 3.32 log_10 N Hessians.
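Putting the pieces together, the following sketch mirrors the structure of Algorithm 1 under the assumption V_n = 1/n. The oracles grad_R(w, n) and hess_R(w, n) for the regularized risk on the first n samples are assumed to be supplied by the user, and no safeguards beyond the gradient-norm test of (14) are included; in pathological cases (e.g., α backtracked until αm rounds down to m) the inner loop of this sketch could stall, which a production implementation would need to handle.

```python
import numpy as np

def ada_newton(grad_R, hess_R, w0, m0, N, alpha0=2.0, beta=0.5, c=200.0):
    """Sketch of Algorithm 1 (Ada Newton) with statistical accuracy V_n = 1/n."""
    n, w = m0, w0
    while n < N:
        m, w_m, alpha = n, w, alpha0
        while True:
            n = min(int(alpha * m), N)
            V_n = 1.0 / n
            g = grad_R(w_m, n)
            H = hess_R(w_m, n)
            w = w_m - np.linalg.solve(H, g)          # unit Newton step, Eq. (7)
            if np.linalg.norm(grad_R(w, n)) < np.sqrt(2.0 * c) * V_n:
                break                                 # within statistical accuracy, Eq. (14)
            alpha = beta * alpha                      # backtrack sample-size growth
    return w
```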
4 Convergence Analysis
In this section we study the proof of Theorem 1. The main idea of the Ada Newton algorithm is introducing a policy for increasing the size of the training set from m to n in a way that the current variable w_m is in the Newton quadratic convergence phase for the next regularized empirical risk R_n. In the following proposition, we characterize the required condition to guarantee staying in the local neighborhood of Newton's method.
Proposition 3. Consider the sets S_m and S_n as subsets of the training set T such that S_m ⊂ S_n ⊂ T. We assume that the numbers of samples in the sets S_m and S_n are m and n, respectively. Further, define w_m as a V_m-optimal solution of the risk R_m, i.e., R_m(w_m) - R_m(w_m*) ≤ V_m. In addition, define λ_n(w) := ( ∇R_n(w)^T ∇²R_n(w)^{-1} ∇R_n(w) )^{1/2} as the Newton decrement of variable w associated with the risk R_n. If Assumptions 1-3 hold, then Newton's method at point w_m is in the quadratic convergence phase for the objective function R_n, i.e., λ_n(w_m) < 1/4, if we have

( 2(M + cV_m)V_m / (cV_n) )^{1/2} + ( (2(n-m)/n) V_n^{1/2} + (√(2c) + 2√c + c||w*||)(V_m - V_n) ) / (cV_n)^{1/2} ≤ 1/4    w.h.p.    (15)
Proof. See Section 7.1 in the supplementary material.
From the analysis of Newton's method we know that if the Newton decrement λ_n(w) is smaller than 1/4, the variable w is in the local neighborhood of Newton's method; see e.g., Chapter 9 of [4]. From the result in Proposition 3, we obtain a sufficient condition to guarantee that λ_n(w_m) < 1/4, which implies that w_m, which is a V_m-optimal solution for the regularized empirical loss R_m, i.e., R_m(w_m) - R_m(w_m*) ≤ V_m, is in the local neighborhood of the optimal argument of R_n in which Newton's method converges quadratically.
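The Newton decrement used in Proposition 3 is cheap to evaluate once the gradient and Hessian are available; a sketch:

```python
import numpy as np

def newton_decrement(grad, hess):
    """lambda_n(w) = (grad^T H^{-1} grad)^{1/2}. Values below 1/4 certify
    the quadratic convergence phase invoked in Proposition 3."""
    return float(np.sqrt(grad @ np.linalg.solve(hess, grad)))
```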
Unfortunately, the quadratic convergence of Newton's method for self-concordant functions is in terms of the Newton decrement λ_n(w), and it does not necessarily guarantee quadratic convergence in terms of the objective function error. To be more precise, we can show that λ_n(w_n) ≤ λ_n(w_m)²; however, we cannot conclude that the quadratic convergence of Newton's method implies that R_n(w_n) - R_n(w_n*) is bounded by a constant multiple of (R_n(w_m) - R_n(w_n*))². In the following proposition we try to characterize an upper bound for the error R_n(w_n) - R_n(w_n*) in terms of the squared error (R_n(w_m) - R_n(w_n*))² using the quadratic convergence property of the Newton decrement.
Proposition 4. Consider w_m as a variable that is in the local neighborhood of the optimal argument of the risk R_n where Newton's method has a quadratic convergence rate, i.e., λ_n(w_m) ≤ 1/4. Recall the definition of the variable w_n in (7) as the updated variable using the Newton step. If Assumptions 1 and 2 hold, then the difference R_n(w_n) - R_n(w_n*) is upper bounded by

R_n(w_n) - R_n(w_n*) ≤ 144 (R_n(w_m) - R_n(w_n*))².    (16)
Proof. See Section 7.2 in the supplementary material.
The result in Proposition 4 provides an upper bound for the sub-optimality R_n(w_n) - R_n(w_n*) in terms of the sub-optimality of the variable w_m for the risk R_n, i.e., R_n(w_m) - R_n(w_n*). Recall that we know that w_m is within the statistical accuracy of R_m, i.e., R_m(w_m) - R_m(w_m*) ≤ V_m, and we aim to show that the updated variable w_n stays within the statistical accuracy of R_n, i.e., R_n(w_n) - R_n(w_n*) ≤ V_n. This can be done by showing that the upper bound for R_n(w_n) - R_n(w_n*) in (16) is smaller than V_n. We proceed to derive an upper bound for the sub-optimality R_n(w_m) - R_n(w_n*) in the following proposition.
Proposition 5. Consider the sets S_m and S_n as subsets of the training set T such that S_m ⊂ S_n ⊂ T. We assume that the numbers of samples in the sets S_m and S_n are m and n, respectively. Further, define w_m as a V_m-optimal solution of the risk R_m, i.e., R_m(w_m) - R_m(w_m*) ≤ V_m. If Assumptions 1-3 hold, then the empirical risk error R_n(w_m) - R_n(w_n*) of the variable w_m corresponding to the set S_n is bounded above by

R_n(w_m) - R_n(w_n*) ≤ V_m + (2(n-m)/n)(V_{n-m} + V_m) + 2(V_m - V_n) + (c(V_m - V_n)/2) ||w*||²    w.h.p.    (17)
Proof. See Section 7.3 in the supplementary material.
The result in Proposition 5 characterizes the sub-optimality of the variable w_m, which is a V_m sub-optimal solution for the risk R_m, with respect to the empirical risk R_n associated with the set S_n.

The results in Propositions 3, 4, and 5 lead to the result in Theorem 1. To be more precise, from the result in Proposition 3 we obtain that the condition in (10) implies that w_m is in the local neighborhood of the optimal argument of R_n and λ_n(w_m) ≤ 1/4. Hence, the hypothesis of Proposition 4 is satisfied and we have R_n(w_n) - R_n(w_n*) ≤ 144(R_n(w_m) - R_n(w_n*))². This result paired with the result in Proposition 5 shows that if the condition in (11) is satisfied we can conclude that R_n(w_n) - R_n(w_n*) ≤ V_n, which completes the proof of Theorem 1.
Figure 1: Comparison of SGD, SAGA, Newton, and Ada Newton in terms of number of effective passes over the dataset (left) and runtime in seconds (right) for the protein homology dataset. The vertical axis in both plots is the suboptimality R_N(w) - R_N*.
5
Experiments
In this section, we study the performance of Ada Newton and compare it with the state-of-the-art in solving a large-scale classification problem. In the main paper we only use the protein homology dataset provided on the KDD Cup 2004 website. Further numerical experiments on various datasets can be found in Section 7.4 in the supplementary material. The protein homology dataset contains N = 145751 samples and the dimension of each sample is p = 74. We consider three algorithms to compare with the proposed Ada Newton method. One of them is the classic Newton's method with backtracking line search. The second algorithm is Stochastic Gradient Descent (SGD) and the last one is the SAGA method introduced in [6]. In our experiments, we use the logistic loss and set the regularization parameters as c = 200 and V_n = 1/n.
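For concreteness, the gradient and Hessian of this regularized logistic risk can be computed as in the sketch below; the vectorized form and label convention y ∈ {-1, +1} are assumptions of this example.

```python
import numpy as np

def logistic_grad_hess(w, X, y, c, V_n):
    """Gradient and Hessian of R_n(w) = (1/n) sum log(1 + exp(-y_i x_i^T w))
    + (c V_n / 2) ||w||^2; rows of X are samples, y in {-1, +1}."""
    n = len(y)
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))           # sigma(-y_i x_i^T w)
    g = -(X.T @ (y * s)) / n + c * V_n * w          # regularized gradient
    D = s * (1.0 - s)                               # per-sample curvature
    H = (X.T * D) @ X / n + c * V_n * np.eye(w.size)
    return g, H
```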
The stepsize of SGD in our experiments is 2 × 10^-2. Note that picking a larger stepsize leads to faster but less accurate convergence, while choosing a smaller stepsize improves the accuracy of convergence at the price of a slower convergence rate. The stepsize for SAGA is hand-optimized and the best performance has been observed for a stepsize of 0.2, which is the one that we use in the experiments. For Newton's method, the backtracking line search parameters are α = 0.4 and β = 0.5. In the implementation of Ada Newton we increase the size of the training set by a factor 2 at each iteration, i.e., α = 2, and we observe that the condition ||∇R_n(w_n)|| < √(2c) V_n is always satisfied, so there is no need for reducing the factor α. Moreover, the size of the initial training set is m_0 = 124. For the warmup step that we need to get into the quadratic neighborhood of Newton's method we use the gradient descent method. In particular, we run gradient descent with stepsize 10^-3 for 100 iterations. Note that since the number of samples is very small at the beginning, m_0 = 124, and the regularizer is very large, the condition number of the problem is very small. Thus, gradient descent is able to converge to a good neighborhood of the optimal solution in a reasonable time. Notice that the computational cost of this warm-up process is very low and is equal to 12400 gradient evaluations. This number of samples is less than 10% of the full training set; in other words, the cost is less than 10% of one pass over the dataset. Although this cost is negligible, we account for it in the comparison with SGD, SAGA, and Newton's method. We would like to mention that other algorithms such as Newton's method and stochastic algorithms can also be used for the warm-up process; however, the gradient descent method appears to be the best option since the gradient evaluation is not costly and the problem is well-conditioned for a small training set.
The left plot in Figure 1 illustrates the convergence paths of SGD, SAGA, Newton, and Ada Newton for the protein homology dataset. Note that the x-axis is the total number of samples used divided by the size of the training set N = 145751, which we call the number of passes over the dataset. As we observe, the best performance among the four algorithms belongs to Ada Newton. In particular, Ada Newton is able to achieve the accuracy of R_N(w) - R_N* < 1/N within 2.4 passes over the dataset, which is very close to the theoretical result in Theorem 1 that guarantees accuracy of order O(1/N) after α/(α-1) = 2 passes over the dataset. To achieve the same accuracy of 1/N, Newton's method requires 7.5 passes over the dataset, while SAGA needs 10 passes. The SGD algorithm cannot achieve the statistical accuracy of order O(1/N) even after 25 passes over the dataset.
Although Ada Newton and Newton outperform SAGA and SGD, their computational complexities are different. We address this concern by comparing the algorithms in terms of runtime. The right
plot in Figure 1 demonstrates the convergence paths of the considered methods in terms of runtime. As we observe, Newton's method requires more time than SAGA to achieve the statistical accuracy of 1/N. This observation justifies the belief that Newton's method is not practical for large-scale optimization problems, since enlarging p or worsening the initial solution would degrade its performance even further relative to the results in Figure 1. Ada Newton resolves this issue by starting from a small sample size, which is computationally less costly. Ada Newton also requires Hessian inverse evaluations, but the number of inversions is proportional to log_α N. Moreover, the performance of Ada Newton doesn't depend on the initial point, and the warm-up process is not costly, as described before. We observe that Ada Newton outperforms SAGA significantly. In particular, it achieves the statistical accuracy of 1/N in less than 25 seconds, while SAGA achieves the same accuracy in 62 seconds. Note that since the variable w_N is in the quadratic neighborhood of Newton's method for R_N, the convergence path of Ada Newton eventually becomes quadratic, once the size of the training set becomes equal to the size of the full dataset. It follows that the advantage of Ada Newton with respect to SAGA is more significant if we look for a suboptimality less than V_n. We have observed similar performance for other datasets such as A9A, W8A, COVTYPE, and SUSY; see Section 7.4 in the supplementary material.
6 Discussions
As explained in Section 4, Theorem 1 holds because condition (10) makes w_m part of the quadratic convergence region of R_n. From this fact, it follows that the Newton iteration makes the suboptimality gap R_n(w_n) - R_n(w_n*) the square of the suboptimality gap R_n(w_m) - R_n(w_n*). This yields condition (11) and is the fact that makes Newton steps valuable in increasing the sample size. If we replace Newton iterations by any method with a linear convergence rate, the orders of both sides of condition (11) are the same. This would make aggressive increases of the sample size unlikely.
In Section 1 we pointed out four reasons that challenge the development of stochastic Newton methods. It would not be entirely accurate to call Ada Newton a stochastic method because it doesn't rely on stochastic descent directions. It is, nonetheless, a method for ERM that makes pithy use of the dataset. The challenges listed in Section 1 are overcome by Ada Newton because:
(i) Ada Newton does not use line searches. Optimality improvement is guaranteed by increasing the sample size.
(ii) The advantages of Newton's method are exploited by increasing the sample size at a rate that keeps the solution for sample size m in the quadratic convergence region of the risk associated with sample size n = αm. This allows aggressive growth of the sample size.
(iii) The ERM problem is not necessarily strongly convex. A regularization of order V_n is added to construct the empirical risk R_n.
(iv) Ada Newton inverts approximately log_α N Hessians. To be more precise, the total number of inversions could be larger than log_α N because of the backtracking step. However, the backtracking step is bypassed when the number of samples is sufficiently large.
It is fair to point out that items (ii) and (iv) are true only to the extent that the damped phase in Algorithm 1 is not significant. Our numerical experiments indicate that this is true, but the conclusion is not warranted by our theoretical bounds except when the dataset is very large. This suggests the bounds are loose and that further research is warranted to develop tighter bounds.
References
[1] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
[2] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems 20, Vancouver, British Columbia, Canada, pages 161-168, 2007.
[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[5] Hadi Daneshmand, Aurélien Lucchi, and Thomas Hofmann. Starting small - learning with adaptive sample sizes. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, pages 1463-1471, 2016.
[6] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 1646-1654, 2014.
[7] Murat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled Newton methods. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, Montreal, Quebec, Canada, pages 3052-3060, 2015.
[8] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Competing with the empirical risk minimizer in a single pass. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 728-763, 2015.
[9] Mert Gürbüzbalaban, Asuman Ozdaglar, and Pablo Parrilo. A globally convergent incremental Newton method. Mathematical Programming, 151(1):283-313, 2015.
[10] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, Lake Tahoe, Nevada, United States, pages 315-323, 2013.
[11] Aurélien Lucchi, Brian McWilliams, and Thomas Hofmann. A variance reduced stochastic Newton method. arXiv, 2015.
[12] Aryan Mokhtari and Alejandro Ribeiro. RES: Regularized stochastic BFGS algorithm. IEEE Transactions on Signal Processing, 62(23):6089-6104, 2014.
[13] Aryan Mokhtari and Alejandro Ribeiro. Global convergence of online limited memory BFGS. Journal of Machine Learning Research, 16:3151-3181, 2015.
[14] Philipp Moritz, Robert Nishihara, and Michael I. Jordan. A linearly-convergent stochastic L-BFGS algorithm. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, pages 249-258, 2016.
[15] Yu Nesterov. Introductory Lectures on Convex Programming Volume I: Basic course. Citeseer, 1998.
[16] Yurii Nesterov et al. Gradient methods for minimizing composite objective function. 2007.
[17] Boris T. Polyak and Anatoli B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[18] Zheng Qu, Peter Richtárik, Martin Takáč, and Olivier Fercoq. SDNA: Stochastic dual Newton ascent for empirical risk minimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1823-1832, 2016.
[19] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[20] Nicolas Le Roux, Mark W. Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25, Lake Tahoe, Nevada, United States, pages 2672-2680, 2012.
[21] Nicol N. Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-Newton method for online convex optimization. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, AISTATS 2007, San Juan, Puerto Rico, pages 436-443, 2007.
[22] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635-2670, 2010.
[23] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14:567-599, 2013.
[24] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105-145, 2016.
[25] Vladimir Vapnik. The nature of statistical learning theory. Springer Science & Business Media, 2013.
[26] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.
9
Learning Deep Parsimonious Representations
Renjie Liao1, Alexander Schwing2, Richard S. Zemel1,3, Raquel Urtasun1
University of Toronto1
University of Illinois at Urbana-Champaign2
Canadian Institute for Advanced Research3
{rjliao, zemel, urtasun}@cs.toronto.edu, [email protected]
Abstract
In this paper we aim at facilitating generalization for deep networks while supporting interpretability of the learned representations. Towards this goal, we propose a
clustering based regularization that encourages parsimonious representations. Our
k-means style objective is easy to optimize and flexible, supporting various forms
of clustering, such as sample clustering, spatial clustering, as well as co-clustering.
We demonstrate the effectiveness of our approach on the tasks of unsupervised
learning, classification, fine grained categorization, and zero-shot learning.
1 Introduction
In recent years, deep neural networks have been shown to perform extremely well on a variety of
tasks including classification [21], semantic segmentation [13], machine translation [27] and speech
recognition [16]. This has led to their adoption across many areas such as computer vision, natural
language processing and robotics [16, 21, 22, 27]. Three major advances are responsible for the
recent success of neural networks: the increase in available computational resources, access to large
scale data sets, and several algorithmic improvements.
Many of these algorithmic advances are related to regularization, which is key to prevent overfitting
and improve generalization of the learned classifier, as the current trend is to increase the capacity of
neural nets. For example, batch normalization [18] is used to normalize intermediate representations
which can be interpreted as imposing constraints. In contrast, dropout [26] removes a fraction of the
learned representations at random to prevent co-adaptation. Learning of de-correlated activations [6]
shares a similar idea since it explicitly discourages correlation between the units.
In this paper we propose a new type of regularization that encourages the network representations to
form clusters. As a consequence, the learned feature space is compactly representable, facilitating
generalization. Furthermore, clustering supports interpretability of the learned representations. We
formulate our regularization with a k-means style objective which is easy to optimize, and investigate
different types of clusterings, including sample clustering, spatial clustering, and co-clustering.
We demonstrate the generalization performance of our proposed method in several settings: autoencoders trained on the MNIST dataset [23], classification on CIFAR10 and CIFAR100 [20], as
well as fine-grained classification and zero-shot learning on the CUB-200-2011 dataset [34]. We
show that our approach leads to significant wins in all these scenarios. In addition, we are able to
demonstrate on the CUB-200-2011 dataset that the network representation captures meaningful part
representations even though it is not explicitly trained to do so.
2 Related Work
Standard neural network regularization involves penalties on the weights based on the norm of the
parameters [29, 30]. Also popular are regularization methods applied to intermediate representations,
such as Dropout [26], Drop-Connect [32], Maxout [10] and DeCov [6]. These approaches share the
aim of preventing the activations in the network from being correlated. Our work can be seen as a different
form of regularization, where we encourage parsimonious representations.
A variety of approaches have applied clustering to the parameters of the neural network with the
aim of compressing the network. Compression rates of more than an order of magnitude were
demonstrated in [11] without sacrificing accuracy. In the same spirit hash functions were exploited in
[5]. Early approaches to compression include biased weight decay [12] and [14, 24], which prunes
the network based on the Hessian of the loss function.
Recently, various combinations of clustering with representation learning have been proposed. We
categorize them broadly into two areas: (i) work that applies clustering after having learned a
representation, and (ii) approaches that jointly optimize the learning and clustering objectives. [4]
combines deep belief networks (DBN) with non-parametric maximum-margin clustering in a post-hoc manner: a DBN is trained layer-wise to obtain an intermediate representation of the data;
non-parametric maximum-margin clustering is then applied to the data representation. Another line of
work utilizes an embedding of the deep network, which can be based on annotated data [15], or from
a learned unsupervised method such as a stacked auto-encoder [28]. In these approaches, the network
is trained to approximate the embedding, and subsequently either k-means or spectral clustering is
performed to partition the space. An alternative is to use non-negative matrix factorization, which
represents a given data matrix as the product of components [31]. This deep non-negative matrix
factorization is trained using the reconstruction loss rather than a clustering objective. Nonetheless,
it was shown that factors lower in the hierarchy have superior clustering performance on low-level concepts while factors later in the hierarchy cluster high-level concepts. The aforementioned
approaches differ from our proposed technique, since we aim at jointly learning a representation that
is parsimonious via a clustering regularization.
Also related are approaches that utilize sparse coding. Wang et al. [33] unroll the iterations forming the sparse codes and optimize the involved parameters end-to-end using a clustering objective as the loss function. The proposed framework is further augmented by clustering objectives applied to
intermediate representations, which act as feature regularization within the unrolled optimization.
They found that features lower in the unrolled hierarchy cluster low-level concepts, while features
later in the hierarchy capture high-level concepts. Our method differs in that we use convolutional
neural networks rather than unrolling a sparse coding optimization.
In the context of unsupervised clustering, [35] exploited agglomerative clustering as a regularizer; this
approach was formulated as a recurrent network. In contrast we employ a k-means like clustering
objective which simplifies the optimization significantly and does not require a recurrent procedure.
Furthermore, we investigate both unsupervised and supervised learning.
3 Learning Deep Parsimonious Representations
In this section, we introduce our new clustering based regularization which not only encourages the
neural network to learn more compact representations, but also enables interpretability of the neural
network. We first show that by exploiting different unfoldings of the representation tensor, we obtain
multiple types of clusterings, each possessing different properties. We then devise an efficient online
update to jointly learn the clustering with the parameters of the neural network.
3.1 Clustering of Representations
We first introduce some notation. We refer to [K] as the set of the first K positive integers, i.e., [K] = {1, 2, ..., K}. We use S\A to denote the set S with the elements of the set A removed. A tensor is a multilinear map over a set of vector spaces. In tensor terminology, the n-mode vectors of a D-order tensor Y ∈ R^{I_1×I_2×···×I_D} are the I_n-dimensional vectors obtained from Y by varying the index of the I_n dimension while keeping all other indices fixed. An n-mode matrix unfolding of a tensor is a matrix which has all n-mode vectors as its columns [7]. Formally, we use the operator T^{{I_n}→{I_j | j∈[D]\n}} to denote the n-mode matrix unfolding, which returns a matrix of size I_n × ∏_{j∈[D]\n} I_j. Similarly, we define T^{{I_i,I_j}→{I_k | k∈[D]\{i,j}}} to be an (i, j)-mode matrix unfolding operator; in this case a column vector is a concatenation of one i-mode vector and one j-mode vector. We denote the m-th row vector of a matrix X as X_m.
Figure 1: (a) Sample clustering and (b) spatial clustering. Samples, pixels, and channels are visualized as multi-channel maps, cubes, and maps in depth, respectively. The receptive fields in the input image are denoted as red boxes.
In this paper we assume the representation of one layer within a neural network to be a 4-D tensor Y ∈ R^{N×C×H×W}, where N, C, H and W are the number of samples within a mini-batch, the number of hidden units, and the height and width of the representation, respectively. Note that C, H and
W can vary between layers, and in the case of a fully connected layer, the dimensions along height
and width become a singleton and the tensor degenerates to a matrix.
Let L be the loss function of a neural network. In addition, we refer to the clustering regularization of
a single layer via R. The final objective is L + λR, where λ adjusts the importance of the clustering
regularization. Note that we can add a regularization term for any subset of layers, but we focus on a
single layer for notational simplicity. In what follows, we show three different types of clustering,
each possessing different properties. In our framework any variant can be applied to any layer.
(A) Sample Clustering: We first investigate clustering along the sample dimension. Since the
cluster assignments of different layers are not linked, each layer is free to cluster examples in a
different way. For example, in a ConvNet, bottom layer representations may focus on low-level visual
cues, such as color and edges, while top layer features may focus on high-level attributes which have
a more semantic meaning. We refer the reader to Fig. 1 (a) for an illustration. In particular, given the
representation tensor Y, we first unfold it into a matrix T^{{N}→{H,W,C}}(Y) ∈ R^{N×HWC}. We then
encourage the samples to cluster as follows:
$$R_{\mathrm{sample}}(\mathcal{Y},\mu) = \frac{1}{2NCHW}\sum_{n=1}^{N}\left\| T^{\{N\}\to\{H,W,C\}}(\mathcal{Y})_n - \mu_{z_n}\right\|^2, \tag{1}$$
where μ is a matrix of size K × HWC encoding all cluster centers, with K the total number of clusters, and z_n ∈ [K] is a discrete latent variable indicating which cluster the n-th sample belongs to. Note that for a fully connected layer, the formulation is the same except that T^{{N}→{H,W,C}}(Y)_n and μ_{z_n} are C-sized vectors, since H = W = 1 in this case.
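As an illustrative sketch (not the released implementation), Eq. (1) can be evaluated in a few lines on top of the `unfold` helper above:

```python
def sample_clustering_loss(Y, mu, z):
    """Eq. (1): mean squared distance of each sample row to its center.

    Y : (N, C, H, W) activations; mu : (K, H*W*C) centers; z : (N,) labels.
    """
    N, C, H, W = Y.shape
    X = unfold(Y, 'sample')               # (N, H*W*C)
    diff = X - mu[z]                      # subtract the assigned centers
    return (diff ** 2).sum() / (2 * N * C * H * W)
```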
(B) Spatial Clustering: The representation of one sample can be regarded as a C-channel "image." Each spatial location within that "image" can be thought of as a "pixel," and is a vector of size C (shown as a colored bar in Fig. 1). For a ConvNet, every "pixel" has a corresponding receptive field covering a local region in the input image. Therefore, by clustering "pixels" of all images during learning, we expect to model local parts shared by multiple objects or scenes. To achieve this, we adopt the unfolding operator T^{{N,H,W}→{C}}(Y) and use
$$R_{\mathrm{spatial}}(\mathcal{Y},\mu) = \frac{1}{2NCHW}\sum_{i=1}^{NHW}\left\| T^{\{N,H,W\}\to\{C\}}(\mathcal{Y})_i - \mu_{z_i}\right\|^2. \tag{2}$$
Note that although we use the analogy of a "pixel," when using text data a "pixel" may correspond to a word. For spatial clustering, the dimension of the matrix μ is K × C.
(C) Channel Co-Clustering: This regularizer groups the channels of different samples directly, thus co-clustering samples and filters. We expect this type of regularization to model re-occurring patterns shared not only among different samples, but also within each sample.
Algorithm 1: Learning Parsimonious Representations
1: Initialization: maximum training iterations R, batch size B, smoothing weight α, set of clustering layers S, set of cluster centers {μ_k^0 | k ∈ [K]}, update period M
2: For iteration t = 1, 2, ..., R:
3:   For layer l = 1, 2, ..., L:
4:     Compute the output representation of layer l as X.
5:     If l ∈ S:
6:       Assign clusters: z_n = argmin_k ||X_n − μ_k^{t−1}||², ∀n ∈ [B].
7:       Compute cluster centers: μ̂_k = (1/|N_k|) Σ_{n∈N_k} X_n, where N_k = [B] ∩ {n | z_n = k}.
8:       Smooth cluster centers: μ_k^t = α μ̂_k + (1 − α) μ_k^{t−1}.
9:     End
10:    End
11:   Compute the gradients with the cluster centers μ_k^t fixed.
12:   Update the weights.
13:   Update drifted cluster centers using Kmeans++ every M iterations.
14: End
Relying on the unfolding operator T^{{N,C}→{H,W}}(Y), we formulate this type of clustering objective as
$$R_{\mathrm{channel}}(\mathcal{Y},\mu) = \frac{1}{2NCHW}\sum_{i=1}^{NC}\left\| T^{\{N,C\}\to\{H,W\}}(\mathcal{Y})_i - \mu_{z_i}\right\|^2. \tag{3}$$
Note that the dimension of the matrix μ is K × HW in this case.
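Since Eqs. (1)-(3) differ only in the unfolding, a single hedged sketch covers the spatial and channel variants as well, reusing the `unfold` helper defined earlier:

```python
def clustering_loss(Y, mu, z, mode):
    """Eqs. (1)-(3) share one form; only the unfolding differs.

    mode='sample'  : mu is (K, H*W*C), z has length N
    mode='spatial' : mu is (K, C),     z has length N*H*W
    mode='channel' : mu is (K, H*W),   z has length N*C
    """
    N, C, H, W = Y.shape
    X = unfold(Y, mode)
    return ((X - mu[z]) ** 2).sum() / (2 * N * C * H * W)
```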
3.2 Efficient Online Update
We now derive an efficient online update to jointly learn the weights while clustering the representations of the neural network. In particular, we illustrate the sample clustering case while noting that
the other types can be derived easily by applying the corresponding unfolding operator. For ease of
notation, we denote the unfolded matrix T^{{N}→{H,W,C}}(Y) as X. The gradient of the clustering regularization layer w.r.t. its input representation X can be expressed as
$$\frac{\partial R}{\partial X_n} = \frac{1}{NCHW}\left(\left(X_n - \mu_{z_n}\right) - \frac{1}{Q_{z_n}}\sum_{z_p = z_n,\,\forall p\in[N]}\left(X_p - \mu_{z_p}\right)\right), \tag{4}$$
where Qzn is the number of samples which belong to the zn -th cluster. This gradient is then
backpropagated through the network to obtain the gradient w.r.t. the parameters of the network.
The time and space complexity of the gradient computation of one regularization layer are max(O(KCHW), O(NCHW)) and O(NCHW), respectively. Note that we can cache the centered data X_n − μ_{z_n} in the forward pass to speed up the gradient computation.
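A minimal sketch of this gradient, assuming the rows have already been unfolded and assigned (the 1/NCHW factor of Eq. (4) is left to the caller):

```python
import numpy as np

def clustering_grad(X, mu, z):
    """Bracketed term of Eq. (4) w.r.t. the unfolded rows X.

    X : (M, D) unfolded rows; mu : (K, D) centers; z : (M,) assignments.
    """
    centered = X - mu[z]                  # X_n - mu_{z_n}, cacheable forward
    grad = centered.copy()
    for k in np.unique(z):
        idx = (z == k)
        # subtract (1/Q_k) times the sum over members of the same cluster
        grad[idx] -= centered[idx].mean(axis=0)
    return grad
```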
The overall learning algorithm of our framework is summarized in Alg. 1. In the forward pass, we first compute the representation of the n-th sample as X_n for each layer. We then infer the latent cluster label z_n for each sample based on the distance to the cluster centers μ_k^{t−1} from the last time step t − 1, and assign the sample to the cluster center which has the smallest distance. Once all the cluster assignments are computed, we estimate the cluster centers μ̂_k based on the new labels of the current batch.
We then combine the estimate based on the current batch with the former cluster center. This is
done via an online update. We found an online update together with the random restart strategy
to work well in practice, as the learning of the neural network proceeds one mini-batch at a time,
and as it is too expensive to recompute the cluster assignment for all data samples in every iteration.
Since we trust our current cluster center estimate more than older ones, we smooth the estimation using an exponential moving average: the cluster center estimate at iteration t is obtained via μ_k^t = α μ̂_k + (1 − α) μ_k^{t−1}, where α is a smoothing weight.
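Steps 6-8 of Alg. 1 (assignment followed by the smoothed center update) can be sketched as follows; this is an illustrative NumPy version, not the TensorFlow implementation:

```python
import numpy as np

def update_centers(X, mu, alpha):
    """One mini-batch of steps 6-8 of Alg. 1 on unfolded rows.

    X : (B, D) rows; mu : (K, D) current centers mu^{t-1};
    alpha : smoothing weight of the exponential moving average.
    """
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)   # (B, K)
    z = d2.argmin(axis=1)                                       # step 6
    mu_t = mu.copy()
    for k in range(len(mu)):
        members = X[z == k]
        if len(members):                                        # step 7
            mu_hat = members.mean(axis=0)
            mu_t[k] = alpha * mu_hat + (1 - alpha) * mu[k]      # step 8
    return mu_t, z
```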
Measurement              Train          Test
AE                       2.69 ± 0.12    3.61 ± 0.13
AE + Sample-Clustering   2.73 ± 0.01    3.50 ± 0.01

Table 1: Autoencoder experiments on MNIST. We report the average of the mean reconstruction error over 4 trials and the corresponding standard deviation.
Method                  CIFAR10 Train   CIFAR10 Test   CIFAR100 Train   CIFAR100 Test
Caffe                   94.87 ± 0.14    76.32 ± 0.17   68.01 ± 0.64     46.21 ± 0.34
Weight Decay            95.34 ± 0.27    76.79 ± 0.31   69.32 ± 0.51     46.93 ± 0.42
DeCov                   88.78 ± 0.23    79.72 ± 0.14   77.92            40.34
Dropout                 99.10 ± 0.17    77.45 ± 0.21   60.77 ± 0.47     48.70 ± 0.38
Sample-Clustering       89.93 ± 0.19    81.05 ± 0.41   63.60 ± 0.55     50.50 ± 0.38
Spatial-Clustering      90.50 ± 0.05    81.02 ± 0.12   64.38 ± 0.38     50.18 ± 0.49
Channel Co-Clustering   89.26 ± 0.25    80.65 ± 0.23   63.42 ± 1.34     49.80 ± 0.25

Table 2: CIFAR10 and CIFAR100 results. For DeCov, no standard deviation is provided for the CIFAR100 results [6]. All our approaches outperform the baselines.
However, as the representation learned by the neural network may go through drastic changes, especially in the beginning of training, some of the cluster centers may quickly become less favored, so that the number of incoming samples assigned to them is largely reduced. To overcome this issue, we exploit the Kmeans++ [3] procedure to re-sample such cluster centers from the current mini-batch. Specifically, denoting the distance between sample X_n and its nearest cluster center as d_n, the probability of taking X_n as the new cluster center is d_n² / Σ_i d_i². After sampling, we replace the old cluster center with the new one and continue the learning process. In practice, at the end of every epoch, we apply the Kmeans++ update to cluster centers for which the number of assigned samples is small. See Alg. 1 for an outline of the steps. The overall procedure stabilizes the optimization and also increases the diversity of the cluster centers.
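The Kmeans++-style restart can be sketched as below; `min_count=10` mirrors the CUB setting reported in Section 4.3, and the uniform treatment of all drifted centers is an assumption:

```python
import numpy as np

def kmeanspp_restart(X, mu, z, min_count=10, rng=np.random.default_rng()):
    """Re-sample centers that attract too few samples (Kmeans++ style).

    A row X_n is drawn with probability d_n^2 / sum_i d_i^2, where d_n is
    the distance of X_n to its nearest current center.
    """
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1).min(axis=1)
    p = d2 / d2.sum()
    counts = np.bincount(z, minlength=len(mu))
    for k in np.where(counts < min_count)[0]:
        mu[k] = X[rng.choice(len(X), p=p)]   # replace the drifted center
    return mu
```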
In the backward pass, we fix the latest estimate of the cluster centers μ_k^t and compute the gradient of the loss function and the gradient of the clustering objective based on Eq. (4). Then we back-propagate all the gradients and update the weights.
4 Experiments
In this section, we conduct experiments on unsupervised, supervised and zero-shot learning on several
datasets. Our implementation based on TensorFlow [9] is publicly available.1 For initializing the
cluster centers before training, we randomly choose them from the representations obtained with the
initial network.
4.1 Autoencoder on MNIST
We first test our method on the unsupervised learning task of training an autoencoder. Our architecture
is identical to [17]. For ease of training we did not tie the weights between the encoder and the
decoder. We use the squared ℓ2 reconstruction error as the loss function and SGD with momentum.
The standard training-test-split is used. We compute the mean reconstruction error over all test images
and repeat the experiments 4 times with different random initializations. We compare the baseline
model, i.e., a plain autoencoder, with one that employs our sample-clustering regularization on all
layers except the top fully connected layer. Sample clustering was chosen since this autoencoder
only contains fully connected layers. The number of clusters and the regularization weight λ of all layers are set to 100 and 1.0e−2, respectively. For both models the same learning rate and momentum
are used. Our exact parameter choices are detailed in the Appendix. As shown in Table 1, our
regularization facilitates generalization as it suffers less from overfitting. Specifically, applying our
regularization results in lower test set error despite slightly higher training error. More importantly,
the standard deviation of the error is one order of magnitude smaller for both training and testing
when applying our regularization. This indicates that our sample-clustering regularization stabilizes
the model.
¹ https://github.com/lrjconan/deep_parsimonious
Figure 2: Visualization of clusterings on the CIFAR10 dataset. Rows 1, 2 (FC-4) each show examples belonging to a single sample-cluster; rows 3, 4 (Conv-2) show regions clustered via spatial clustering.
4.2 CIFAR10 and CIFAR100
In this section, we explore the CIFAR10 and CIFAR100 datasets [20]. CIFAR10 consists of 60,000
32 × 32 images assigned to 10 categories, while CIFAR100 differentiates between 100 classes. We
use the standard split on both datasets. The quick CIFAR10 architecture of Caffe [19] is used for
benchmarking both datasets. It consists of 3 convolutional layers and 1 fully connected layer followed
by a softmax layer. The detailed parameters are publicly available on the Caffe [19] website. We
report mean accuracy averaged over 4 trials. For fully connected layers we use the sample-clustering
objective. For convolutional layers, we provide the results of all three clustering objectives, which we
refer to as "sample-clustering," "spatial-clustering," and "channel co-clustering," respectively. We set all hyper-parameters based on cross-validation. Specifically, the number of cluster centers is set to 100 for all layers for both CIFAR10 and CIFAR100. λ is set to 1.0e−3 and 1.0e−2 for the first two convolutional layers and the remaining layers, respectively, in CIFAR10; for CIFAR100, λ is set to 10 and 1 for the first convolutional layer and the remaining layers, respectively. The smoothing parameter α is set to 0.9 and 0.95 for CIFAR10 and CIFAR100, respectively.
Generalization: In Table 2 we compare our framework to some recent regularizers, like DeCov [6],
Dropout [26] and the baseline results obtained using Caffe. We again observe that all of our methods
achieve better generalization performance.
Visualization: To demonstrate the interpretability of our learned network, we visualize sample-clustering and spatial-clustering in Fig. 2, showing the top-10 ranked images and parts per cluster. In the case of sample-clustering, for each cluster we rank all its assigned images based on the distance to the cluster center. We chose to show 2 clusters from the 4th fully connected layer. In the case of spatial-clustering, we rank all "pixels" belonging to one cluster based on the distance to the cluster center. Note that we have one part (i.e., one receptive field region in the input image) for each "pixel." We chose to show 2 clusters from the 2nd convolutional layer. The receptive field of the 2nd convolutional layer is of size 18 × 18 in the original 32 × 32 sized image. We observe that clusterings of the fully connected layer representations encode high-level semantic meaning. In contrast, clusterings of the convolutional layer representations encode attributes like shape. Note that some parts are uninformative, which may be due to the fact that images in CIFAR10 are very small.
Additional clusters and visualizations on CIFAR100 are shown in the Appendix.
Quantitative Evaluation of Parsimonious Representation: We quantitatively evaluate our
learned parsimonious representation on CIFAR100. Since only the image category is provided
as ground truth, we investigate sample clustering using the 4th fully connected layer where representations capture semantic meaning. In particular, we apply K-means clustering to the learned
representation extracted from the model with and without sample clustering respectively. For both
cases, we set the number of clusters to be 100 and control the random seed to be the same. The
most frequent class label within one cluster is assigned to all of its members. Then we compute the
normalized mutual information (NMI) [25] to measure the clustering accuracy. The average results
over 10 runs are shown in Table 3.
Method             NMI
Baseline           0.4122 ± 0.0012
Sample-Clustering  0.4914 ± 0.0011

Table 3: Normalized mutual information of sample clustering on CIFAR100.
Method                  Train   Test
DeCAF [8]               -       58.75
Sample-Clustering       100.0   61.77
Spatial-Clustering      100.0   61.67
Channel Co-Clustering   100.0   61.49

Table 4: Classification accuracy on CUB-200-2011.
Our representations achieve significantly better clustering quality compared to the baseline, which suggests that they are distributed in a more compact way in the feature space.
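A close variant of this evaluation protocol can be sketched with scikit-learn; note that the majority-label assignment described above is implicit in the NMI computation here, which is our assumption about the exact protocol:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def clustering_nmi(features, labels, k=100, seed=0):
    """K-means on fc-layer features, scored against ground truth via NMI."""
    assignments = KMeans(n_clusters=k, random_state=seed).fit_predict(features)
    return normalized_mutual_info_score(labels, assignments)
```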
4.3 CUB-200-2011
Next we test our framework on the Caltech-UCSD birds dataset [34] which contains 11,788 images
of 200 different categories. We follow the dataset split provided by [34] and the common practice
of cropping the image using the ground-truth bounding box annotation of the birds [8, 36]. We use
Alex-Net [21] pretrained on ImageNet as the base model and adapt the last layer to fit classification
of 200 categories. We resize the image to 227 × 227 to fit the input size. We add clusterings to all
layers except the softmax layer. Based on cross-validation, the number of clusters is set to 200 for all layers. For convolutional layers, we set λ to 1.0e−5 for the first (bottom) two and use 1.0e−4 for the remaining ones. For fully connected layers, we set λ to 1.0e−3, and α is equal to 0.5. We apply Kmeans++ to replace cluster centers with fewer than 10 assigned samples at the end of every epoch.
Generalization: We investigate the impact of our parsimonious representation on generalization
performance. We compare with the DeCAF result reported in [8], which used the same network
to extract a representation and applied logistic regression on top for fine-tuning. We also fine-tune
Alex-Net which uses weight-decay and Dropout, and report the best result we achieved in Table 4.
We observe that for the Alex-Net architecture our clustering improves the generalization compare to
direct fine-tuning and the DeCAF result. Note that Alex-Net pretrained on ImageNet easily overfits
on this dataset as all training accuracies reach 100 percent.
Visualization: To visualize the sample-clustering and spatial-clustering we follow the setting
employed when evaluating on the CIFAR dataset. For the selected cluster center we show the 10
closest images in Fig. 3. For sample clustering, 2 clusters from the 3rd convolutional layer and the
7th fully connected layer are chosen for visualization. For spatial clustering, 2 clusters from the 2nd
and 3rd convolutional layers are chosen for visualization. More clusters are shown in the Appendix.
The receptive fields of pixels from the 2nd and 3rd convolutional layers are of sizes 59 × 59 and 123 × 123 in the resized 227 × 227 image. We observe that cluster centers of sample clustering
applied to layers lower in the network capture pose and shape information, while cluster centers from
top layers model the fine-grained categories of birds. For spatial clustering, cluster centers from
different layers capture parts of birds in different scales, like the beak, chest, etc.
4.4 Zero-Shot Learning
We also investigate a zero-shot setting on the CUB dataset to see whether our parsimonious representation is applicable to unseen categories. We follow the setting in [1, 2] and use the same split
where 100, 50 and 50 classes are used as training, validation and testing (unseen classes). We use
a pre-trained Alex-Net as the baseline model and extract 4096-dimensional representations from the 7th fully connected (fc) layer. We compare sample-clustering against other recent methods which
also report results of using 7th fc feature of Alex-Net. Given these features, we learn the output
embedding W via the same unregularized structured SVM as in [1, 2]:
$$\min_{W}\;\frac{1}{N}\sum_{n=1}^{N}\max_{y\in\mathcal{Y}}\left(0,\;\Delta(y_n,y) + x_n^{\top}W\left[\varphi(y)-\varphi(y_n)\right]\right), \tag{5}$$
where x_n and y_n are the feature and class label of the n-th sample, and Δ is the 0-1 loss function. φ is the class-attribute matrix provided by the CUB dataset, where each entry is a real-valued score indicating how likely a human thinks one attribute is present in a given class.
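For concreteness, the structured hinge objective of Eq. (5) can be written as the following sketch; the subgradient optimization of W is omitted, and the matrix shapes are our assumptions:

```python
import numpy as np

def ssvm_loss(W, X, y, Phi):
    """Average structured hinge loss of Eq. (5) with the 0-1 task loss.

    X : (N, d) features; y : (N,) class indices; Phi : (C, a) class-attribute
    matrix (row c = attribute scores of class c); W : (d, a) embedding.
    """
    scores = X @ W @ Phi.T                          # x_n^T W phi(y), all y
    n = np.arange(len(y))
    margins = 1.0 + scores - scores[n, y][:, None]  # Delta = 1 for y != y_n
    margins[n, y] = 0.0                             # Delta(y_n, y_n) = 0
    return np.maximum(margins, 0.0).max(axis=1).mean()
```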
Figure 3: Visualization of sample and pixel clustering on the CUB-200-2011 dataset. Rows 1-4 (panels Conv-3, Conv-3, FC-7) show sample clusters and rows 5-8 (panels Conv-2, Conv-2, Conv-3, Conv-3, FC-7) show spatial clusters. Receptive fields are truncated to fit the images.
Method             Top-1 Accuracy
ALE [1]            26.9
SJE [2]            40.3
Sample-Clustering  46.1

Table 5: Zero-shot learning on CUB-200-2011.
We tune the hyper-parameters on the validation set and report results in terms of top-1 accuracy averaged over the unseen classes. As shown in Table 5, our approach significantly outperforms the other approaches.
5 Conclusions
We have proposed a novel clustering based regularization which encourages parsimonious representations, while being easy to optimize. We have demonstrated the effectiveness of our approach on
a variety of tasks including unsupervised learning, classification, fine-grained categorization, and
zero-shot learning. In the future we plan to apply our approach to even larger networks, e.g., residual
nets, and develop a probabilistic formulation which provides a soft clustering.
Acknowledgments
This work was partially supported by ONR-N00014-14-1-0232, NVIDIA and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center
(DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and
distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
Disclaimer: The views and conclusions contained herein are those of the authors and should not
be interpreted as necessarily representing the official policies or endorsements, either expressed or
implied, of IARPA, DoI/IBC, or the U.S. Government.
References
[1] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-embedding for attribute-based classification.
In Proc. CVPR, 2013.
[2] Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele. Evaluation of output embeddings for fine-grained
image classification. In Proc. CVPR, 2015.
[3] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proc. SODA, 2007.
[4] G. Chen. Deep learning with nonparametric clustering. arXiv preprint arXiv:1501.03084, 2015.
[5] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the
hashing trick. In arXiv preprint arXiv:1504.04788, 2015.
[6] M. Cogswell, F. Ahmed, R. Girshick, L. Zitnick, and D. Batra. Reducing Overfitting in Deep Networks by
Decorrelating Representations. Proc. ICLR, 2016.
[7] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM
Journal on Matrix Analysis and Applications, 2000.
[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional
activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.
[9] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software
available from tensorflow.org.
[10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout Networks. Proc. ICLR,
2013.
[11] S. Han, H. Mao, and W. J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning,
Trained Quantization and Huffman Coding. In Proc. ICLR, 2016.
[12] S. J. Hanson and L. Y. Pratt. Comparing biases for minimal network construction with back-propagation.
In Proc. NIPS, 1989.
[13] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous Detection and Segmentation. In Proc.
ECCV, 2014.
[14] B. Hassibi and D. G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Proc.
NIPS, 1993.
[15] J. R. Hershey, Z. Chen, J. L. Roux, and S. Watanabe. Deep clustering: Discriminative embeddings for
segmentation and separation. arXiv preprint arXiv:1508.04306, 2015.
[16] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen,
T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views
of four research groups. IEEE Signal Processing Magazine, 2012.
[17] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science,
2006.
[18] S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift. In Proc. ICML, 2015.
[19] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[20] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009.
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In Proc. NIPS, 2012.
[22] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 2015.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proc. of IEEE, 1998.
[24] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal Brain Damage. In Proc. NIPS, 1989.
[25] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to information retrieval, 2008.
[26] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. JMLR, 2014.
[27] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proc. NIPS,
2014.
[28] F. Tian, B. Gao, Q. Cui, E. Chen, and T.-Y. Liu. Learning deep representations for graph clustering. In
Proc. AAAI, 2014.
[29] R. Tibshirani. Regression shrinkage and selection via the lasso. J. of the Royal Statistical Society, 1996.
[30] A. N. Tikhonov. On the stability of inverse problems. USSR Academy of Sciences, 1943.
[31] G. Trigeorgis, K. Bousmalis, S. Zafeiriou, and B. Schuller. A deep semi-nmf model for learning hidden
representations. In Proc. ICML, 2014.
[32] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using dropconnect.
Proc. ICML, 2013.
[33] Z. Wang, S. Chang, J. Zhou, and T. S. Huang. Learning a task-specific deep architecture for clustering.
arXiv preprint arXiv:1509.00151, 2015.
[34] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200.
Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[35] J. Yang, D. Parikh, and D. Batra. Joint unsupervised learning of deep representations and image clusters.
arXiv preprint arXiv:1604.03628, 2016.
[36] N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based r-cnns for fine-grained category detection.
In Proc. ECCV, 2014.
Dialog-based Language Learning
Jason Weston
Facebook AI Research,
New York.
[email protected]
Abstract
A long-term goal of machine learning research is to build an intelligent dialog
agent. Most research in natural language understanding has focused on learning
from fixed training sets of labeled data, with supervision either at the word level
(tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision does not reflect how humans learn, where language is both learned by, and used for, communication. In this work, we study
dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this
setup in two domains: the bAbI dataset of [23] and large-scale question answering
from [3]. We evaluate a set of baseline learning strategies on these tasks, and show
that a novel model incorporating predictive lookahead is a promising approach for
learning from a teacher's response. In particular, a surprising result is that it can
learn to answer questions correctly without any reward-based supervision at all.
1 Introduction
Many of machine learning's successes have come from supervised learning, which typically involves employing annotators to label large quantities of data per task. However, humans can learn by acting and learning from the consequences of (i.e., the feedback from) their actions. When humans act in dialogs (i.e., make speech utterances) the feedback is from other humans' responses, which hence
contain very rich information. This is perhaps most pronounced in a student/teacher scenario where
the teacher provides positive feedback for successful communication and corrections for unsuccessful ones [8, 22]. However, in general any reply from a dialog partner, teacher or not, is likely to
contain an informative training signal for learning how to use language in subsequent conversations.
In this paper we explore whether we can train machine learning models to learn from dialogs. The
ultimate goal is to be able to develop an intelligent dialog agent that can learn while conducting conversations. To do that it needs to learn from feedback that is supplied as natural language. However,
most machine learning tasks in the natural language processing literature are not of this form: they
are either hand labeled at the word level (part of speech tagging, named entity recognition), segment
(chunking) or sentence level (question answering) by labelers. Subsequently, learning algorithms
have been developed to learn from that kind of supervision. We therefore need to develop evaluation
datasets for the dialog-based language learning setting, as well as developing models and algorithms
able to learn in such a regime.
The contribution of the present work is thus:
• We introduce a set of tasks that model natural feedback from a teacher and hence assess the feasibility of dialog-based language learning.
• We evaluate some baseline models on this data, comparing to standard supervised learning.
• We introduce a novel forward prediction model, whereby the learner tries to predict the teacher's replies to its actions, yielding promising results, even with no reward signal at all.
2 Related Work
In human language learning the usefulness of social interaction and natural infant-directed conversations is emphasized, see e.g. the review paper [6], although the usefulness of feedback for learning grammar is disputed [10]. Support for the usefulness of feedback is found, however, in second-language learning [1] and learning by students [4, 8, 22].
In machine learning, one line of research has focused on supervised learning from dialogs using
neural models [18, 3]. Question answering given either a database of knowledge [3] or short stories
[23] can be considered as a simple case of dialog which is easy to evaluate. Those tasks typically
do not consider feedback. There is work on the use of feedback and dialog for learning, notably
for collecting knowledge to answer questions [5, 14], the use of natural language instruction for
learning symbolic rules [7] and the use of binary feedback (rewards) for learning parsers [2].
Another setting which uses feedback is the setting of reinforcement learning, see e.g. [16] for a
summary of its use in dialog. However, those approaches often consider reward as the feedback
model rather than exploiting the dialog feedback per se. Nevertheless, reinforcement learning ideas
have been used to good effect for other tasks as well, such as understanding text adventure games
[12], machine translation and summarization [15]. Recently, [11] also proposed a reward-based
learning framework for learning how to learn.
Finally, forward prediction models, which we make use of in this work, have been used for learning eye tracking [17], controlling robot arms [9] and vehicles [21], and action-conditional video
prediction in atari games [13]. We are not aware of their use thus far for dialog.
3 Dialog-Based Supervision Tasks
Dialog-based supervision comes in many forms. As far as we are aware it is a currently unsolved
problem which type of learning strategy will work in which setting. In this section we therefore
identify different modes of dialog-based supervision, and build a learning problem for each. The
goal is to then evaluate learners on each type of supervision.
We thus begin by selecting two existing datasets: (i) the single supporting fact problem from the
bAbI datasets [23] which consists of short stories from a simulated world followed by questions;
and (ii) the MovieQA dataset [3] which is a large-scale dataset (≈ 100k questions over ≈ 75k entities) based on questions with answers in the open movie database (OMDb). For each dataset
we then consider ten modes of dialog-based supervision. The supervision modes are summarized
in Fig. 1 using a snippet of the bAbI dataset as an example. The same setups are also used for
MovieQA, some examples of which are given in Fig. 2. We now describe the supervision setups.
Imitating an Expert Student In Task 1 the dialogs take place between a teacher and an expert
student who gives semantically coherent answers. Hence, the task is for the learner to imitate that
expert student, and become an expert themselves. For example, imagine the real-world scenario
where a child observes its two parents talking to each other: the child can learn, but it is not actually taking part in the conversation. Note that our main goal in this paper is to examine how a non-expert
can learn to improve its dialog skills while conversing. The rest of our tasks will hence concentrate
on that goal. This task can be seen as a natural baseline for the rest of our tasks given the same input
dialogs and questions.
Positive and Negative Feedback In Task 2, when the learner answers a question the teacher then
replies with either positive or negative feedback. In our experiments the subsequent responses are
variants of "No, that's incorrect" or "Yes, that's right". In the datasets we build there are 6 templates for positive feedback and 6 templates for negative feedback, e.g. "Sorry, that's not it.", "Wrong", etc. To separate the notion of positive from negative (otherwise the signal is just words with no
notion that yes is better than no) we assume an additional external reward signal that is not part
of the text. As shown in Fig. 1 Task 2, (+) denotes positive reward external to the dialog (e.g.
feedback provided by another medium, such as a nod of the head from the teacher). This is provided
with every positive response. Note the difference in supervision compared to Task 1: there every
answer is right and provides positive supervision. Here, only the answers the learner got correct
have positive supervision. This could clearly be a problem when the learner is unskilled: it will
supply incorrect answers and never (or hardly ever) receive positive responses.
Figure 1: Sample dialogs with differing supervision signals (tasks 1 to 10). In each case the same example story is given for simplicity (shown first). Black text is spoken by the teacher, red text denotes responses by the learner, blue text is provided by an expert student (which the learner can imitate), (+) denotes positive reward external to the dialog (e.g. feedback provided by another medium, such as a nod of the head from the teacher).
Story Context (same for all examples below)
Mary went to the hallway.
John moved to the bathroom.
Mary travelled to the kitchen.
Task 1: Imitating an Expert Student
Where is Mary? A:kitchen
Where is John? A:bathroom
Task 2: Positive and Negative Feedback
Where is Mary? A:playground
No, that's incorrect.
Where is John? A:bathroom
Yes, that's right! (+)
Task 3: Answers Supplied by Teacher
Where is Mary? A:bedroom
No, the answer is kitchen.
Where is John? A:bathroom
Correct! (+)
Task 4: Hints Supplied by Teacher
Where is Mary? A:bathroom
No, they are downstairs.
Where is John? A:kitchen
No, they are upstairs.
Task 5: Supporting Facts Supplied by Teacher
Where is Mary? A:kitchen
Yes, that's right! (+)
Where is John? A:hallway
No, because John moved to the bathroom.
Task 6: Partial Feedback
Where is Mary? A:kitchen
Yes, that's right!
Where is John? A:bathroom
Yes, that's correct! (+)
Task 7: No Feedback
Where is Mary? A:kitchen
Yes, that's right!
Where is John? A:bathroom
Yes, that's correct!
Task 8: Imitation and Feedback Mixture
Where is Mary? A:kitchen
Where is John? A:bathroom
That's right! (+)
Task 9: Asking For Corrections
Where is Mary? A:kitchen
Yes, that's right! (+)
Where is John? A:hallway
No, that's not right. A:Can you help me?
Bathroom.
Task 10: Asking For Supporting Facts
Where is Mary? A:kitchen
Yes, that's right! (+)
Where is John? A:hallway
No, that's not right. A:Can you help me?
A relevant fact is John moved to the bathroom.
Figure 2: Samples from the MovieQA dataset [3]. In our experiments we consider 10 different
language learning setups as described in Figure 1 and Sec. 3. The examples given here are for tasks
2 and 3, questions are in black and answers in red, and (+) indicates receiving positive reward.
Task 2: Positive and Negative Feedback
What movies are about open source? Revolution OS
That's right! (+)
What movies did Darren McGavin star in? Carmen
Sorry, that's not it.
Who directed the film White Elephant? M. Curtiz
No, that is incorrect.
Task 3: Answers Supplied by Teacher
What films are about Hawaii? 50 First Dates
Correct! (+)
Who acted in Licence to Kill? Billy Madison
No, the answer is Timothy Dalton.
What genre is Saratoga Trunk in? Drama
Yes! (+)
Answers Supplied by Teacher In Task 3 the teacher gives positive and negative feedback as in
Task 2; however, when the learner's answer is incorrect, the teacher also responds with the correction. For example, if "Where is Mary?" is answered with the incorrect answer "bedroom", the teacher responds "No, the answer is kitchen", see Fig. 1 Task 3. If the learner knows how to use this extra
information, it effectively has as much supervision signal as with Task 1, and much more than for
Task 2.
Hints Supplied by Teacher In Task 4, the corrections provided by the teacher do not provide
the exact answer as in Task 3, but only a useful hint. This setting is meant to mimic the real life
occurrence of being provided only partial information about what you did wrong. In our datasets
we do this by providing the class of the correct answer, e.g. "No, they are downstairs" if the answer should be kitchen, or "No, it is a director" for the question "Who directed Monsters, Inc.?" (using
OMDB metadata). The supervision signal here is hence somewhere in between Task 2 and 3.
Supporting Facts Supplied by Teacher In Task 5, another way of providing partial supervision
for an incorrect answer is explored. Here, the teacher gives a reason (explanation) why the answer
is wrong by referring to a known fact that supports the true answer that the incorrect answer may
contradict. For example "No, because John moved to the bathroom" for an incorrect answer to "Where is John?", see Fig. 1 Task 5. This is related to what is termed strong supervision in [23]
where supporting facts and answers are given for question answering tasks.
Partial Feedback Task 6 considers the case where external rewards are only given some of the time (50% of the time) for correct answers; the setting is otherwise identical to Task 3. This attempts to mimic
the realistic situation of some learning being more closely supervised (a teacher rewarding you for
getting some answers right) whereas other dialogs have less supervision (no external rewards). The
task attempts to assess the impact of such partial supervision.
No Feedback In Task 7 external rewards are not given at all, only text; the setting is otherwise identical to Tasks 3 and 6. This task explores whether it is actually possible to learn how to answer at all in such
a setting. We find in our experiments the answer is surprisingly yes, at least in some conditions.
Imitation and Feedback Mixture Task 8 combines Tasks 1 and 2. The goal is to see if a learner
can learn successfully from both forms of supervision at once. This mimics a child both observing
pairs of experts talking (Task 1) while also trying to talk (Task 2).
Asking For Corrections Another natural way of collecting supervision is for the learner to ask
questions of the teacher about what it has done wrong. Task 9 tests one of the simplest instances, where asking "Can you help me?" when wrong obtains the correct answer from the teacher. This is
thus related to the supervision in Task 3 except the learner must first ask for help in the dialog. This
is potentially harder for a model as the relevant information is spread over a larger context.
Asking for Supporting Facts Finally, in Task 10, a second less direct form of supervision for the
learner after asking for help is to receive a hint rather than the correct answer, such as "A relevant fact is John moved to the bathroom" when asking "Can you help me?", see Fig. 1 Task 10. This is
thus related to the supervision in Task 5 except the learner must request help.
In our experiments we constructed the ten supervision tasks for the two datasets, which are all available for download at http://fb.ai/babi. They were built in the following way: for each task we consider a fixed policy¹ for performing actions (answering questions) which gets questions correct with probability π_acc (i.e. the chance of getting the red text correct in Figs. 1 and 2). We thus can compare different learning algorithms for each task over different values of π_acc (0.5, 0.1 and 0.01). In all cases a training, validation and test set is provided. For the bAbI dataset this consists of 1000, 100 and 1000 questions respectively per task, and for MovieQA there are ≈ 96k, ≈ 10k and ≈ 10k respectively. MovieQA also includes a knowledge base (KB) of ≈ 85k facts from OMDB; the memory network model we employ uses inverted index retrieval based on the question to form relevant memories from this set, see [3] for more details. Note that because the policies are fixed, the experiments in this paper are not in a reinforcement learning setting.
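As an aside, such a fixed answering policy is simple to simulate; the following minimal Python sketch illustrates the idea (the function name and signature are our own illustration, not part of the released datasets or code):

import random

def fixed_policy(correct_answer, candidate_answers, pi_acc, rng=random.Random(0)):
    """Simulate a fixed answering policy: with probability pi_acc return the
    correct answer, otherwise return a uniformly sampled incorrect candidate.
    (Illustrative sketch; names are ours, not from the released code.)"""
    if rng.random() < pi_acc:
        return correct_answer
    wrong = [a for a in candidate_answers if a != correct_answer]
    return rng.choice(wrong) if wrong else correct_answer

# Example: generate learner answers for one bAbI-style question
answers = ["kitchen", "bathroom", "hallway", "bedroom", "playground"]
print(fixed_policy("kitchen", answers, pi_acc=0.5))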
4
Learning Models
Our main goal is to explore training strategies that can execute dialog-based language learning. To
this end we evaluate four possible strategies: imitation learning, reward-based imitation, forward
prediction, and a combination of reward-based imitation and forward prediction. We will subsequently describe each in turn.
We test all of these approaches with the same model architecture: an end-to-end memory network
(MemN2N) [20]. Memory networks are a recently introduced model that have been shown to do
¹ Since the policy is fixed and actually does not depend on the model being learnt, one could also think of it as coming from another agent (or the same agent in the past), which in either case is an imperfect expert.
Figure 3: Architectures for (reward-based) imitation and forward prediction. [Diagram: the input (initially the query) and a set of memory vectors feed a memory module through repeated addressing/read hops that update an internal controller state; panel (a), the model for (reward-based) imitation learning, scores candidate answers to produce the output under direct or reward-based supervision; panel (b), the model for forward prediction, instead predicts the response to the answer (the action taken).]
well on a number of text understanding tasks, including question answering, dialog [3] and language
modeling [20]. In particular, they outperform LSTMs and other baselines on the bAbI datasets [23]
which we employ with dialog-based learning modifications in Sec. 3. They are hence a natural
baseline model for us to use in order to explore differing modes of learning in our setup. In the
following we will first review memory networks, detailing the explicit choices of architecture we
made, and then show how they can be modified and applied to our setting of dialog-based language
learning.
Memory Networks A high-level description of the memory network architecture we use is given
in Fig. 3 (a). The input is the last utterance of the dialog, x, as well as a set of memories (context)
(c1 , . . . , cN ) which can encode both short-term memory, e.g. recent previous utterances and replies,
and long-term memories, e.g. facts that could be useful for answering questions. The context inputs
ci are converted into vectors mi via embeddings and are stored in the memory. The goal is to
produce an output â by processing the input x and using that to address and read from the memory, m, possibly multiple times, in order to form a coherent reply. In the figure the memory is read twice, which is termed multiple "hops" of attention.
In the first step, the input x is embedded using a matrix A of size d × V, where d is the embedding dimension and V is the size of the vocabulary, giving q = Ax, where the input x is a bag-of-words vector. Each memory ci is embedded using the same matrix, giving mi = Aci. The output of addressing and then reading from memory in the first hop is:
$$o_1 = \sum_i p_i^1 m_i, \qquad p_i^1 = \mathrm{Softmax}(q^\top m_i).$$
Here, the match between the input and the memories is computed by taking the inner product followed by a softmax, yielding p¹, a probability vector over the memories. The goal is to select memories relevant to the last utterance x, i.e. the most relevant have large values of p_i^1. The output memory representation o_1 is then constructed as the weighted sum of memories, i.e. weighted by p¹. The memory output is then added to the original input, u_1 = R_1(o_1 + q), to form the new
state of the controller, where R_1 is a d × d rotation matrix². The attention over the memory can then be repeated using u_1 instead of q as the addressing vector, yielding:
$$o_2 = \sum_i p_i^2 m_i, \qquad p_i^2 = \mathrm{Softmax}(u_1^\top m_i).$$
The controller state is updated again with u_2 = R_2(o_2 + u_1), where R_2 is another d × d matrix to
be learnt. In a two-hop model the final output is then defined as:
$$\hat{a} = \mathrm{Softmax}(u_2^\top A y_1, \ldots, u_2^\top A y_C) \qquad (1)$$
where there are C candidate answers in y. In our experiments C is the set of actions that occur in the training set for the bAbI tasks, and for MovieQA it is the set of words retrieved from the KB.
² Optionally, different dictionaries can be used for inputs, memories and outputs instead of being shared.
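To make the two-hop computation concrete, the following self-contained numpy sketch implements the two attention hops and eq. (1). It is an illustrative re-implementation under our own naming, not the authors' code, and it omits training, the featurization pipeline and the optional separate dictionaries:

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memn2n_two_hop(x, contexts, candidates, A, R1, R2):
    """Minimal two-hop MemN2N forward pass; x, contexts and candidates are
    bag-of-words count vectors of size V, A is the d x V embedding matrix,
    R1/R2 are the d x d controller maps."""
    q = A @ x                                  # embed the last utterance
    m = np.stack([A @ c for c in contexts])    # embed memories, shape (N, d)
    p1 = softmax(m @ q)                        # hop 1: attention over memories
    u1 = R1 @ (m.T @ p1 + q)                   # controller update
    p2 = softmax(m @ u1)                       # hop 2: re-address with u1
    u2 = R2 @ (m.T @ p2 + u1)
    y = np.stack([A @ cand for cand in candidates])
    return softmax(y @ u2)                     # score candidate answers, eq. (1)

# toy check with random data: V = 6 vocabulary words, d = 4 dimensions
rng = np.random.default_rng(0)
A, R1, R2 = rng.normal(size=(4, 6)), rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
probs = memn2n_two_hop(rng.integers(0, 2, 6), rng.integers(0, 2, (3, 6)),
                       rng.integers(0, 2, (5, 6)), A, R1, R2)
print(probs.shape, round(float(probs.sum()), 6))  # (5,) 1.0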
Having described the basic architecture, we now detail the possible training strategies we can employ
for our tasks.
Imitation Learning This approach involves simply imitating one of the speakers in observed dialogs, which is essentially a supervised learning objective³. This is the setting that most existing dialog learning, as well as question answering systems, employ for learning. Examples arrive as (x, c, a) triples, where a is (assumed to be) a good response to the last utterance x given context c. In our case, the whole memory network model defined above is trained using stochastic gradient descent by minimizing a standard cross-entropy loss between â and the label a.
Reward-based Imitation If some actions are poor choices, then one does not want to repeat them; that is, we shouldn't treat them as a supervised objective. In our setting positive reward is
only obtained immediately after (some of) the correct actions, or else is zero. A simple strategy is
thus to only apply imitation learning on the rewarded actions. The rest of the actions are simply
discarded from the training set. This strategy is derived naturally as the degenerate case one obtains
by applying policy gradient [24] in our setting where the policy is fixed (see end of Sec. 3). In more
complex settings (i.e. where actions that are made lead to long-term changes in the environment and
delayed rewards) applying reinforcement learning algorithms would be necessary, e.g. one could
still use policy gradient to train the MemN2N but applied to the model's own policy.
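The data-filtering step of reward-based imitation can be stated in a few lines; the sketch below is our own illustration (field names such as "reward" are hypothetical conventions, not from the paper's code):

def rbi_training_set(episodes):
    """Reward-based imitation: keep only (context, utterance, action) triples
    whose action received positive external reward; everything else is
    discarded before ordinary supervised training."""
    return [(ep["context"], ep["x"], ep["a"])
            for ep in episodes if ep.get("reward", 0) > 0]

episodes = [
    {"context": ["Mary went to the hallway."], "x": "Where is Mary?", "a": "hallway", "reward": 1},
    {"context": ["Mary went to the hallway."], "x": "Where is Mary?", "a": "kitchen", "reward": 0},
]
print(len(rbi_training_set(episodes)))  # 1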
Forward Prediction An alternative method of training is to perform forward prediction: the aim is, given an utterance x from speaker 1 and an answer a by speaker 2 (i.e., the learner), to predict x̄, the response to the answer from speaker 1. That is, in general, to predict the changed state of the world after action a, which in this case involves the new utterance x̄.
To learn from such data we propose the following modification to memory networks, also shown
in Fig. 3 (b): essentially we chop off the final output from the original network of Fig. 3 (a) and
replace it with some additional layers that compute the forward prediction. The first part of the
network remains exactly the same and only has access to input x and context c, just as before. The
computation up to u2 = R2 (o2 + u1 ) is thus exactly the same as before.
At this point we observe that the computation of the output in the original network, by scoring
candidate answers in eq. (1) looks similar to the addressing of memory. Our key idea is thus
to perform another "hop" of attention but over the candidate answers rather than the memories.
Crucially, we also incorporate the information of which action (candidate) was actually selected in
the dialog (i.e. which one is a). After this ?hop?, the resulting state of the controller is then used to
do the forward prediction.
Concretely, we compute:
$$o_3 = \sum_i p_i^3 \big(A y_i + \beta^* [a = y_i]\big), \qquad p_i^3 = \mathrm{Softmax}(u_2^\top A y_i), \qquad (2)$$
where β* is a d-dimensional vector, also learnt, that represents in the output o_3 the action that was actually selected. After obtaining o_3, the forward prediction is then computed as:
$$\hat{x} = \mathrm{Softmax}(u_3^\top A \bar{x}_1, \ldots, u_3^\top A \bar{x}_{\bar{C}})$$
where u_3 = R_3(o_3 + u_2). That is, it computes the scores of the possible responses to the answer a
over C̄ possible candidates. The mechanism in eq. (2) gives the model a way to compare the most likely answers to x with the given answer a, which in terms of supervision we believe is critical. For example, in question answering, if the given answer a is incorrect and the model can assign high p_i^3 to the correct answer, then the output o_3 will contain a small amount of β*; conversely, o_3 has a large amount of β* if a is correct. Thus, o_3 informs the model of the likely response x̄ from the teacher. Training can then be performed using the cross-entropy loss between x̂ and the label x̄, similar to before. In the event of a large number of candidates C̄ we subsample the negatives, always keeping x̄ in the set. The set of answers y can also be similarly sampled, making the method highly scalable.
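The extra hop over candidate answers can be sketched as follows (illustrative numpy code under our own naming; it mirrors eq. (2) and the response-scoring formula above, with β* written as beta):

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward_prediction_hop(u2, A, candidates, action_idx, beta, R3, responses):
    """One 'hop' of attention over candidate answers: the selected action a
    is marked by adding the learnt vector beta to its embedding before the
    weighted sum, then the new controller state u3 scores candidate
    teacher responses."""
    y = np.stack([A @ c for c in candidates])    # candidate answer embeddings
    p3 = softmax(y @ u2)                         # attention over answers, eq. (2)
    marker = np.zeros_like(y)
    marker[action_idx] = beta                    # beta * [a = y_i]
    o3 = (y + marker).T @ p3
    u3 = R3 @ (o3 + u2)
    xbar = np.stack([A @ r for r in responses])  # candidate responses x-bar
    return softmax(xbar @ u3)

rng = np.random.default_rng(1)
d, V = 4, 6
probs = forward_prediction_hop(rng.normal(size=d), rng.normal(size=(d, V)),
                               rng.integers(0, 2, (5, V)), 2, rng.normal(size=d),
                               rng.normal(size=(d, d)), rng.integers(0, 2, (7, V)))
print(probs.shape, round(float(probs.sum()), 6))  # (7,) 1.0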
³ Imitation learning algorithms are not always strictly supervised algorithms; they can also depend on the agent's actions. That is not the setting we use here, where the task is to imitate one of the speakers in a dialog.
Table 1: Test accuracy (%) on the Single Supporting Fact bAbI dataset for various supervision approaches (training with 1000 examples on each) and different policies π_acc. A task is successfully passed if ≥ 95% accuracy is obtained (such entries are shown in blue in the original).

                                           | imitation      | reward-based    | forward         |
                                           | learning       | imitation (RBI) | prediction (FP) | RBI + FP
Supervision Type             (π_acc =)     | 0.5  0.1  0.01 | 0.5  0.1  0.01  | 0.5  0.1  0.01  | 0.5  0.1  0.01
1 - Imitating an Expert Student            | 100  100  100  | 100  100  100   |  23   30   29   |  99   99  100
2 - Positive and Negative Feedback         |  79   28   21  |  99   92   91   |  93   54   30   |  99   92   96
3 - Answers Supplied by Teacher            |  83   37   25  |  99   96   92   |  99   96   99   |  99  100   98
4 - Hints Supplied by Teacher              |  85   23   22  |  99   91   90   |  97   99   66   |  99  100  100
5 - Supporting Facts Supplied by Teacher   |  84   24   27  | 100   96   83   |  98   99  100   | 100   99  100
6 - Partial Feedback                       |  90   22   22  |  98   81   59   | 100  100   99   |  99  100   99
7 - No Feedback                            |  90   34   19  |  20   22   29   | 100   98   99   |  98   99   99
8 - Imitation + Feedback Mixture           |  90   89   82  |  99   98   98   |  28   64   67   |  99   98   97
9 - Asking For Corrections                 |  85   30   22  |  99   89   83   |  23   15   21   |  95   90   84
10 - Asking For Supporting Facts           |  86   25   26  |  99   96   84   |  23   30   48   |  97   95   91
Number of completed tasks (≥ 95%)          |   1    1    1  |   9    5    2   |   5    5    4   |  10    8    8

(All four learners are MemN2N variants.)
A major benefit of this particular architectural design for forward prediction is that after training with the forward prediction criterion, at test time one can "chop off" the top of the model again to retrieve the original memory network model of Fig. 3 (a). One can thus use it to predict answers â given only x and c. We can thus evaluate its performance directly for that goal as well.
Finally, and importantly, if the answer to the response x̄ carries pertinent supervision information for choosing â, as for example in many of the settings of Sec. 3 (and Fig. 1), then this will be backpropagated through the model. This is simply not the case in the imitation, reward-shaping [19] or reward-based imitation learning strategies, which concentrate on the (x, a) pairs.
Reward-based Imitation + Forward Prediction As our reward-based imitation learning uses the
architecture of Fig. 3 (a), and forward prediction uses the same architecture but with the additional
layers of Fig. 3 (b), we can learn jointly with both strategies. One simply shares the weights across
the two networks, and performs gradient steps for both criteria, one of each type per action. The
former makes use of the reward signal (which, when available, is a very useful signal) but fails to
use potential supervision feedback in the subsequent utterances, as described above. It also effectively ignores dialogs carrying no reward. Forward prediction in contrast makes use of dialog-based
feedback and can train without any reward. On the other hand not using rewards when available is a
serious handicap. Hence, the mixture of both strategies is a potentially powerful combination.
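Schematically, the joint training loop interleaves one gradient step per criterion for each action; the following sketch is our own illustration, with the two gradient computations abstracted as callables rather than a real MemN2N implementation:

def train_joint(params, batches, imitation_grad, forward_grad, lr=0.05):
    """Joint RBI + FP training sketch: parameters are shared between the two
    networks and we take one gradient step per criterion for each action.
    imitation_grad/forward_grad are callables returning parameter gradients
    for the respective losses (stand-ins for backprop through the model)."""
    for batch in batches:
        if batch.get("reward", 0) > 0:          # RBI only uses rewarded actions
            g = imitation_grad(params, batch)
            params = {k: v - lr * g[k] for k, v in params.items()}
        g = forward_grad(params, batch)         # FP uses every dialog turn
        params = {k: v - lr * g[k] for k, v in params.items()}
    return params

# toy usage with dummy gradient callables
zero_grad = lambda p, b: {k: 0.0 for k in p}
print(train_joint({"w": 0.0}, [{"reward": 1}, {"reward": 0}], zero_grad, zero_grad))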
Table 2: Test accuracy (%) on the MovieQA dataset for various supervision approaches. Numbers in bold in the original are the winners for that task and choice of π_acc.

                                           | imitation      | reward-based    | forward         |
                                           | learning       | imitation (RBI) | prediction (FP) | RBI + FP
Supervision Type             (π_acc =)     | 0.5  0.1  0.01 | 0.5  0.1  0.01  | 0.5  0.1  0.01  | 0.5  0.1  0.01
1 - Imitating an Expert Student            |  80   80   80  |  80   80   80   |  24   23   24   |  77   77   77
2 - Positive and Negative Feedback         |  46   29   27  |  52   32   26   |  48   34   24   |  68   53   34
3 - Answers Supplied by Teacher            |  48   29   26  |  52   32   27   |  60   57   58   |  69   65   62
4 - Hints Supplied by Teacher              |  47   29   26  |  51   32   28   |  58   58   42   |  70   54   32
5 - Supporting Facts Supplied by Teacher   |  47   28   26  |  51   32   26   |  43   44   33   |  66   53   40
6 - Partial Feedback                       |  48   29   27  |  49   32   24   |  60   58   58   |  70   63   62
7 - No Feedback                            |  51   29   27  |  22   21   21   |  60   53   58   |  61   56   50
8 - Imitation + Feedback Mixture           |  60   50   47  |  63   53   51   |  46   31   23   |  72   69   69
9 - Asking For Corrections                 |  48   29   27  |  52   34   26   |  67   52   44   |  68   52   39
10 - Asking For Supporting Facts           |  49   29   27  |  52   34   27   |  51   44   35   |  69   53   36
Mean Accuracy                              |  52   36   34  |  52   38   34   |  52   45   40   |  69   60   50

(All four learners are MemN2N variants.)
5
Experiments
We conducted experiments on the datasets described in Section 3. As described before, for each
task we consider a fixed policy for performing actions (answering questions) which gets questions
correct with probability π_acc. We can thus compare the different training strategies described in Sec. 4 over each task for different values of π_acc. Hyperparameters for all methods are optimized on the
validation sets. A summary of the results is reported in Table 1 for the bAbI dataset and Table 2 for
MovieQA. We observed the following results:
• Imitation learning, ignoring rewards, is a poor learning strategy when imitating inaccurate answers, e.g. for π_acc < 0.5. For imitating an expert however (Task 1) it is hard to beat.
• Reward-based imitation (RBI) performs better when rewards are available, particularly in Table 1, but also degrades when they are too sparse, e.g. for π_acc = 0.01.
• Forward prediction (FP) is more robust and has stable performance at different levels of π_acc. However, as it only predicts answers implicitly and does not make use of rewards, it is outperformed by RBI on several tasks, notably Tasks 1 and 8 (because it cannot do supervised learning) and Task 2 (because it does not take advantage of positive rewards).
• FP makes use of dialog feedback in Tasks 3-5 whereas RBI does not. This explains why FP does better with useful feedback (Tasks 3-5) than without (Task 2), whereas RBI cannot.
• Supplying full answers (Task 3) is more useful than hints (Task 4), but hints still help FP more than just yes/no answers without extra information (Task 2).
• When positive feedback is sometimes missing (Task 6) RBI suffers, especially in Table 1. FP does not, as it does not use this feedback.
• One of the most surprising results of our experiments is that FP performs well overall, given that it does not use feedback, which we will attempt to explain subsequently. This is particularly evident on Task 7 (no feedback) where RBI has no hope of succeeding as it has no positive examples. FP on the other hand learns adequately.
• Tasks 9 and 10 are harder for FP as the question is not immediately before the feedback.
• Combining RBI and FP ameliorates the failings of each, yielding the best overall results.
One of the most interesting aspects of our results is that FP works at all without any rewards. In Task 2 it does not even "know" the difference between words like "yes" or "correct" vs. words like "wrong" or "incorrect", so why should it tend to predict actions that lead to a response like "yes, that's right"? This is because there is a natural coherence to predicting true answers that leads to greater accuracy in forward prediction. That is, you cannot predict a "right" or "wrong" response from the teacher if you don't know what the right answer is. In our experiments our policies π_acc sample negative answers equally, which may make learning simpler. We thus conducted an experiment on Task 2 (positive and negative feedback) of the bAbI dataset with a much more biased policy: it is the same as π_acc = 0.5 except when the policy predicts incorrectly there is probability 0.5 of choosing a random guess as before, and 0.5 of choosing the fixed answer bathroom. In this case the FP method obtains 68% accuracy, showing the method still works in this regime, although not as well as before.
6
Conclusion
We have presented a set of evaluation datasets and models for dialog-based language learning. The
ultimate goal of this line of research is to move towards a learner capable of talking to humans, such
that humans are able to effectively teach it during dialog. We believe the dialog-based language
learning approach we described is a small step towards that goal.
This paper only studies some restricted types of feedback, namely positive feedback and corrections
of various types. However, potentially any reply in a dialog can be seen as feedback, and should
be useful for learning. It should be studied whether forward prediction, and the other approaches we tried, work there too. Future work should also develop further evaluation methodologies to test
how the models we presented here, and new ones, work in those settings, e.g. in more complex
settings where actions that are made lead to long-term changes in the environment and delayed
rewards, i.e. extending to the reinforcement learning setting, and to full language generation. Finally,
dialog-based feedback could also be used as a medium to learn non-dialog based skills, e.g. natural
language dialog for completing visual or physical tasks.
Acknowledgments
We thank Arthur Szlam, Y-Lan Boureau, Marc'Aurelio Ranzato, Ronan Collobert, Michael Auli,
David Grangier, Alexander Miller, Sumit Chopra, Antoine Bordes and Leon Bottou for helpful
discussions and feedback, and the Facebook AI Research team in general for supporting this work.
References
[1] M. A. Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.
[2] J. Clarke, D. Goldwasser, M.-W. Chang, and D. Roth. Driving semantic parsing from the world's response. In Proceedings of Computational Natural Language Learning, 2010.
[3] J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
[4] R. Higgins, P. Hartley, and A. Skelton. The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1):53-64, 2002.
[5] B. Hixon, P. Clark, and H. Hajishirzi. Learning knowledge graphs for question answering through conversational dialog. In ACL, 2015.
[6] P. K. Kuhl. Early language acquisition: cracking the speech code. Nature Reviews Neuroscience, 5(11):831-843, 2004.
[7] G. Kuhlmann, P. Stone, R. Mooney, and J. Shavlik. Guiding a reinforcement learner with natural language advice: Initial results in RoboCup soccer. In AAAI-2004 Workshop on Supervisory Control, 2004.
[8] A. S. Latham. Learning through feedback. Educational Leadership, 54(8):86-87, 1997.
[9] I. Lenz, R. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics Science and Systems (RSS), 2015.
[10] G. F. Marcus. Negative evidence in language acquisition. Cognition, 46(1):53-85, 1993.
[11] T. Mikolov, A. Joulin, and M. Baroni. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130, 2015.
[12] K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
[13] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2845-2853, 2015.
[14] A. Pappu and A. Rudnicky. Predicting tasks in goal-oriented spoken dialog systems using semantic knowledge bases. In Proceedings of SIGDIAL, pages 242-250, 2013.
[15] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
[16] V. Rieser and O. Lemon. Reinforcement learning for adaptive dialogue systems. Springer Science & Business Media, 2011.
[17] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(01n02):125-134, 1991.
[18] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. NAACL, 2015.
[19] P.-H. Su, D. Vandyke, M. Gasic, N. Mrksic, T.-H. Wen, and S. Young. Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391, 2015.
[20] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431-2439, 2015.
[21] G. Wayne and L. Abbott. Hierarchical control using networks trained with higher-level forward models. Neural Computation, 2014.
[22] M. G. Werts, M. Wolery, A. Holcombe, and D. L. Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55-75, 1995.
[23] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
[24] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
A Sparse Interactive Model for Matrix Completion
with Side Information
Jin Lu
Guannan Liang
Jiangwen Sun
Jinbo Bi
University of Connecticut
Storrs, CT 06269
{jin.lu, guannan.liang, jiangwen.sun, jinbo.bi}@uconn.edu
Abstract
Matrix completion methods can benefit from side information besides the partially observed matrix. The use of side features that describe the row and column
entities of a matrix has been shown to reduce the sample complexity for completing the matrix. We propose a novel sparse formulation that explicitly models the
interaction between the row and column side features to approximate the matrix
entries. Unlike early methods, this model does not require the low rank condition
on the model parameter matrix. We prove that when the side features span the
latent feature space of the matrix to be recovered, the number of observed entries
needed for an exact recovery is O(log N ) where N is the size of the matrix. If the
side features are corrupted latent features of the matrix with a small perturbation,
our method can achieve an ε-recovery with O(log N) sample complexity. If side information is useless, our method maintains an O(N^{3/2}) sampling rate similar to
classic methods. An efficient linearized Lagrangian algorithm is developed with
a convergence guarantee. Empirical results show that our approach outperforms
three state-of-the-art methods both in simulations and on real world datasets.
1
Introduction
Matrix completion has been a basis of many machine learning approaches for computer vision [6],
recommender systems [21, 24], signal processing [19, 27], among many others. Classically,
low-rank matrix completion methods are based on matrix decomposition techniques which require only the partially observed data in the matrix [15, 3, 14] by solving the following problem
$$\min_E \|E\|_*, \quad \text{subject to } R_\Omega(E) = R_\Omega(F), \qquad (1)$$
where F ∈ R^{m×n} is the partially observed low-rank matrix (with a rank of r) that needs to be recovered, Ω ⊆ {1, ..., m} × {1, ..., n} is the set of indexes where the corresponding components in F are observed, the mapping R_Ω(M): R^{m×n} → R^{m×n} gives another matrix whose (i, j)-th entry is M_{i,j} if (i, j) ∈ Ω (or 0 otherwise), and ‖E‖_* computes the nuclear norm of E. Early theoretical analysis [4, 5, 20] proves that O(Nr log² N) entries are sufficient for an exact recovery if the observed entries are uniformly sampled at random, where N = max{n, m}.
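For concreteness, the sampling operator R_Ω and the feasibility check R_Ω(E) = R_Ω(F) can be written in a few lines of numpy (an illustrative sketch with our own naming):

import numpy as np

def R_omega(M, mask):
    """The sampling operator R_Omega: keep observed entries, zero elsewhere.
    'mask' is a boolean matrix with True at the observed positions Omega."""
    return np.where(mask, M, 0.0)

rng = np.random.default_rng(0)
F = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 6))   # a rank-3 matrix
mask = rng.random(F.shape) < 0.4                        # ~40% observed entries
E = np.zeros_like(F)                                    # a (bad) candidate completion
feasible = np.allclose(R_omega(E, mask), R_omega(F, mask))
print(round(float(mask.mean()), 3), feasible)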
Recent studies have started to explore side information for matrix completion and factorization [1, 18, 7,
17, 8]. For example, to infer the missing ratings in a user-movie rating matrix, descriptors of the
users and movies are often known and may help to build a content-based recommender system. For
instance, kids tend to like cartoons, so the age of a user likely interacts with the cartoon feature of a
movie. When few ratings are known, this side information could be the main source for completing
the matrix. Although several works have found, based on empirical studies, that side features are helpful [17, 18], those methods are based on non-convex matrix factorization formulations without any theoretical guarantees. Three recent methods have focused on convex nuclear-norm regularized
objectives, which leads to theoretical guarantees on matrix recovery [13, 28, 9, 16]. These methods
all construct an inductive model X^⊤GY so that R_Ω(X^⊤GY) = R_Ω(F), where the side matrices
X and Y consist of side features, respectively, for the row entities (e.g., users) and column entities
(e.g., movies) of a (rating) matrix. This inductive model has a parameter matrix G which is either
required to be low rank [13] or to have a minimal nuclear norm ‖G‖_* [28]. Recovering G of a
(usually) smaller size is argued to be easier than directly recovering the matrix F. With a very
strong assumption of "perfect" side information, i.e., both X and Y are orthonormal matrices and
respectively in the latent column and row space of the matrix F, the method in [28] is proved to
require much reduced sample complexity O(log N ) for an exact recovery of F. Because most side
features X and Y are not perfect in practice, a very recent work [9] proposes to use a residual matrix
N to handle the noisy side features. This method constructs an inductive model X^⊤GY + N to
approximate F and requires both G and N to be low rank, or have a low nuclear norm. It uses the
nuclear norm of the residual to quantify the usefulness of side information, and proves an O(log N) sampling rate for an ε-recovery when X and Y span the full latent feature space of F, and o(N) sample complexity when X and Y contain corrupted latent features of F. An ε-recovery is defined as requiring that the expected discrepancy between the predicted matrix and the true matrix be less than an arbitrarily small ε > 0 under a certain probability.
In this paper, we propose a new method for matrix recovery by constructing a sparse interactive model X^⊤GY to approximate F, where G can be sparse but does not need to be low rank. The
(i, j)-th element of G determines the role of the interaction between the i-th feature of users and
the j-th feature of products. The low-rank property of F is commonly assumed to characterize the
observation that similar users tend to rate similar products similarly [4]. When using an inductive approximation F = X^⊤GY, rank(F) ≤ rank(G), so a low-rank requirement on G can be a sufficient condition for the low-rank condition of F. Previous relevant methods [13, 28, 9] all impose the low-rank condition on G, which is however not a necessary condition for F to be low rank (it only becomes a necessary condition when X and Y are full rank). Given general side matrices X ∈ R^{d_1×m} and Y ∈ R^{d_2×n} where the numbers of features d_1, d_2 ≪ N, limiting the interactive model of G ∈ R^{d_1×d_2} to be low rank can be an over-restrictive constraint. In our model, we use
a low-rank matrix E to directly approximate F and then estimate E from the interactive model of
X and Y with a sparse regularizer on G. We show empirically that a low-rank F can be recovered
from a corresponding full (or high) rank G. Our contributions are summarized as follows:
(i) We propose a new formulation that estimates both E and G by imposing a nuclear-norm constraint on E but a general regularizer on G, e.g., the sparse regularizer ‖G‖_1. The proposed model
has recovery guarantees depending on the quality of the side features: (1) when X and Y are full row
rank and span the entire latent feature space of F (but are not required to satisfy the much stronger
condition of being orthonormal as in [28]), O(log N ) observations are still sufficient for our method
to achieve an exact recovery of F. (2) When the side matrices are not full rank and corrupted from
the original latent features of F, i.e., X and Y do not contain enough basis to exactly recover F,
O(log N) observed entries can be sufficient for an ε-recovery.
(ii) A new linearized alternating direction method of multipliers (LADMM) is developed to efficiently solve the proposed formulation. Existing methods that use side information are solved by
standard block-wise coordinate descent algorithms, which have a convergence guarantee to a global solution only when each block-wise subproblem has a unique solution [26]. Our LADMM has a stronger convergence property [29] and benefits from the linear convergence rate of ADMM [11, 23].
(iii) Prior methods focus on the recovery of F, and little light has been shed to understand whether
the interactive model of G can be retrieved. Because of the explicit use of E and G, our method
aims to directly recover both. The unique G in the case of exact recovery of F can be attained by
our algorithm. When G is not unique in the ε-recovery case, our algorithm converges to a point in
the optimal solution set.
2
The Proposed Interactive Model
To utilize the side information in X and Y to complete F, we consider building a predictive model from the observed components that predicts the missing ones. One can simply build a linear model: f = x^⊤u + y^⊤v + g, where x and y are the feature vectors respectively for a user and a product, and u, v and g are model parameters. In real life applications, interactive terms between the features in X and Y can be very important. For example, male users tend to rate science fiction and action movies higher than female users, which can be informative when predicting their ratings. Therefore, a linear model considering no interactive terms can be oversimplified and have low predictive power for missing entries. We hence add interactive terms by introducing an interaction matrix H ∈ R^{d_1×d_2} into the predictive model, which can be written as: f = x^⊤Hy + x^⊤u + y^⊤v + g. By defining
$$G_{(a=d_1+1)\times(b=d_2+1)} = \begin{bmatrix} H & u \\ v^\top & g \end{bmatrix},$$
the above model can be simplified to f = x̃^⊤G ỹ, where x̃ = [x^⊤ 1]^⊤ and ỹ = [y^⊤ 1]^⊤. The following optimization problem can be solved to obtain the model parameter G:
$$\min_{G,E}\ g(G) + \lambda_E \|E\|_*, \quad \text{subject to } \tilde{X}^\top G \tilde{Y} = E,\ \ R_\Omega(E) = R_\Omega(F),$$
where E is a completed version of F, X̃ ∈ R^{a×m} and Ỹ ∈ R^{b×n} are two matrices that are created by augmenting one row of all ones to X and to Y, respectively, and g(G) and ‖E‖_* are used to incorporate the (sparsity) prior of G and the low-rank prior of E. Because the side information data can be noisy and not all the features and their interactions are helpful to the prediction of F, a sparse G is often expected. Our implementation has used g(G) = ‖G‖_1. It is natural to impose a low-rank requirement on E because it is a completed version of a low-rank matrix F. The tuning parameter λ_E is used to balance the two priors in the objective.
Without loss of generality and for convenience of notation, we simply use X and Y to denote the augmented matrices. Denote the Frobenius norm of a matrix by ‖·‖_F. To account for Gaussian noise, we relax the equality constraint X^⊤GY = E and replace it by minimizing their squared residual ‖X^⊤GY − E‖_F^2, and solve the following convex optimization problem to obtain G and E:
$$\min_{G,E}\ \frac{1}{2}\|X^\top GY - E\|_F^2 + \lambda_G\, g(G) + \lambda_E \|E\|_*, \quad \text{subject to } R_\Omega(E) = R_\Omega(F), \qquad (2)$$
where λ_G is another tuning parameter that, together with λ_E, balances the three terms in the objective. In particular, the regularizer g(·) in our theoretical analysis can take any general matrix norm that satisfies ‖M‖_* ≤ C g(M), ∀M, for a constant C; for instance, g(·) can be ‖G‖_1, ‖G‖_F, or ‖G‖_2. Throughout this paper, the matrices X (and Y) refer to either the original X ∈ R^{d_1×m} (and Y ∈ R^{d_2×n}) or the augmented X̃ ∈ R^{a×m} (and Ỹ ∈ R^{b×n}), depending on the user-specified model.
Our formulation (2) differs from existing methods that make use of side information for matrix completion in several ways. Existing methods [28, 13, 9] solve the problem by finding Ĥ that minimizes ‖H‖_* subject to R_Ω(X^⊤HY) = R_Ω(F), but we expand it to include the linear term within the interactive model. The proposed model adds the flexibility to consider both linear and quadratically interactive terms, and allows the algorithm to determine the terms that should be used in the model by enforcing the sparsity in H (or G). Because E = X^⊤GY, the rank of G bounds that of E from above. The existing methods all control the rank of G (e.g. by minimizing ‖G‖_*) to incorporate the prior of low-rank E (and thus low-rank F) in their formulations. However, when the rank of G is not properly chosen during the tuning of hyperparameters, it may not even be a sufficient condition to ensure low-rank E (if rank(E) ≪ the pre-specified rank(G)). It is easy to see that, besides G, a low-rank X or Y can lead to a low-rank E as well. Enforcing a low-rank condition on H or G may limit the search space of the interactive model and thus impair the prediction performance on missing matrix entries, which is demonstrated in our empirical results. Moreover, one can observe that when λ_G is sufficiently large, Eq. (2) is reduced to the standard matrix completion problem (1) without side information because G may degenerate into a zero matrix, so our formulation remains applicable when there is no access to useful side information.
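To make the construction concrete, here is a minimal numpy sketch (our own illustrative code with our own function names, not the authors' implementation) that forms the augmented side matrices, evaluates the interactive prediction X̃^⊤GỸ, and computes the unconstrained part of the objective in Eq. (2):

import numpy as np

def augment(M):
    """Append a row of all ones (turns X into X~, Y into Y~)."""
    return np.vstack([M, np.ones((1, M.shape[1]))])

def interactive_predict(X, Y, G):
    """Full prediction matrix E = X~^T G Y~; G packs the interaction
    matrix H, the linear terms u, v and the offset g."""
    return augment(X).T @ G @ augment(Y)

def objective(G, E, X, Y, lam_G, lam_E):
    """Unconstrained part of problem (2): squared residual + l1 on G +
    nuclear norm on E; the constraint R_Omega(E) = R_Omega(F) would be
    enforced separately (e.g. by projection onto the observed entries)."""
    fit = 0.5 * np.linalg.norm(augment(X).T @ G @ augment(Y) - E, 'fro') ** 2
    return (fit + lam_G * np.abs(G).sum()
            + lam_E * np.linalg.svd(E, compute_uv=False).sum())

rng = np.random.default_rng(0)
d1, d2, m, n = 3, 4, 6, 5
X, Y = rng.normal(size=(d1, m)), rng.normal(size=(d2, n))
G = rng.normal(size=(d1 + 1, d2 + 1))
E = interactive_predict(X, Y, G)
print(E.shape, round(float(objective(G, E, X, Y, 0.1, 0.1)), 4))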
3
Recovery Analysis
Let E_0 and G_0 be the two matrices such that R_Ω(F) = R_Ω(E_0) and E_0 = X^⊤G_0Y. In this section, we give our theoretical results on the sample complexity for achieving an exact recovery of E_0 and G_0 when X and Y are both full row rank (i.e., rank(X) = a and rank(Y) = b), and an ε-recovery of E_0 when the two side matrices are corrupted and less informative. The proofs of all theorems are given in the supplementary materials.
3.1
Sample Complexity for Exact Recovery
Before presenting our results, we give a few definitions. Let F = UΣV^⊤, X^⊤ = U_X Σ_X V_X^⊤ and Y^⊤ = U_Y Σ_Y V_Y^⊤ be the singular value decompositions of F, X^⊤ and Y^⊤, respectively, where all Σ matrices are full rank, meaning that singular vectors corresponding to the singular value 0 are not included in the respective U and V matrices. Let
$$P_U = UU^\top \in \mathbb{R}^{m\times m}, \qquad P_V = VV^\top \in \mathbb{R}^{n\times n},$$
$$P_X = U_X U_X^\top = X^\top V_X \Sigma_X^{-2} V_X^\top X \in \mathbb{R}^{m\times m}, \qquad P_Y = U_Y U_Y^\top = Y^\top V_Y \Sigma_Y^{-2} V_Y^\top Y \in \mathbb{R}^{n\times n},$$
where P_U, P_V, P_X and P_Y project a vector onto the subspaces spanned, respectively, by the columns in U and V and the rows in X and Y. For any matrix M ∈ R^{m×n} that satisfies M = P_X M P_Y, we define two linear operators P_T: R^{m×n} → R^{m×n} and P_{T⊥}: R^{m×n} → R^{m×n} as follows:
$$P_T(M) = P_U M P_Y + P_X M P_V - P_U M P_V,$$
$$P_{T^\perp}(M) = (P_X - P_U) M (P_Y - P_V) = P_{X^\perp} M P_{Y^\perp}.$$
Let μ_0 and μ_1 be the two coherence measures of F, defined as follows, as discussed in [4, 16]:
$$\mu_0 = \max\Big(\frac{m}{r}\max_{1\le i\le m}\|P_U e_i\|^2,\ \frac{n}{r}\max_{1\le j\le n}\|P_V e_j\|^2\Big), \qquad \mu_1 = \frac{mn}{r}\max_{i,j}\big([UV^\top]_{i,j}\big)^2,$$
where e_i is the unit vector with the i-th entry equal to 1. Let μ_XY be the coherence measurement between X and Y, defined as:
$$\mu_{XY} = \max\Big(\max_{1\le i\le m}\frac{m\|x_i\|_2^2}{a},\ \max_{1\le j\le n}\frac{n\|y_j\|_2^2}{b}\Big).$$
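These coherence quantities are directly computable from an SVD; the following numpy sketch (our own transcription of the definitions above, not the authors' code) evaluates μ_0 and μ_1 for a given matrix:

import numpy as np

def coherences(F, r=None):
    """Compute the coherence measures mu_0 and mu_1 of F used in Theorem 1.
    Uses the identity ||P_U e_i||^2 = ||row i of U||^2 for P_U = U U^T."""
    m, n = F.shape
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    if r is None:
        r = int((s > 1e-10).sum())          # numerical rank
    U, V = U[:, :r], Vt[:r, :].T
    mu0 = max(m / r * (np.linalg.norm(U, axis=1) ** 2).max(),
              n / r * (np.linalg.norm(V, axis=1) ** 2).max())
    mu1 = m * n / r * ((U @ V.T) ** 2).max()
    return mu0, mu1

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 7))   # a rank-2 matrix
print(coherences(F))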
With the above definitions, we show in the following theorem that when X and Y are both full row rank, (G_0, E_0) is the unique solution to Eq. (2) with high probability as long as there are O(r log N) observed components in F. In other words, with a sampling rate of O(r log N), our method can fully recover both E_0 and G_0 with a high probability when X and Y are full row rank.
Theorem 1 Let μ = max(μ_0, μ_XY), β = max(‖Σ_X^{-1}‖_2, ‖Σ_Y^{-1}‖_2), N = max(m, n), q_0 = (1/2)(1 + log a − log r), T_0 = (128p/3) μβ max(μ_1, μ) r(a + b) log N and T_1 = (8p/3) μ²β² (ab + r²) log N, where p is a constant. Assume T_1 ≤ q_0 T_0, and that X and Y are both full row rank. For any p > 1, with a probability of at least 1 − 4(q_0 + 1)N^{−p+1} − 2q_0 N^{−p+2}, (G_0, E_0) is the unique optimizer of Problem (2) with a necessary sampling rate as low as O(r log N). More precisely, the sample size |Ω| should satisfy |Ω| ≥ (64p/3) μβ max(μ_1, μ)(1 + log a − log r) r(a + b) log N.
When r ≪ N and r = O(1), the sampling rate for the exact recovery of both E_0 and G_0 reduces to O(log N). A similar sampling rate for a full recovery of E_0 has been developed in [28], where both X and Y, however, need to be orthonormal matrices in their derivation. In Theorem 1, because β is mainly determined by the smallest singular values of the side information matrices, and the sampling rate increases when β increases, the theorem suggests that side information matrices of lower rank would require more observed entries of F for a full recovery of F. An advanced model without the orthonormal assumption has been given in [9], but exact recovery is not discussed. In our case, the two matrices are only required to be full row rank. Moreover, the theoretical and empirical results in our work give the first careful investigation of the recovery of both G_0 and E_0.
3.2 Sample Complexity for ε-Recovery
The condition of full-rank side information matrices may not be satisfied in some cases, so E_0 (or F) cannot be fully recovered. We analyze the error bound of our model and prove a reduced sample complexity, in comparison with standard matrix completion methods, for an ε-recovery when the side information matrices are not full row rank or their rank is difficult to attain.
Theorem 2 Denote ‖E‖_* ≤ θ, ‖G‖_1 ≤ γ and ‖X^⊤GY − E‖_F ≤ δ, and suppose the perfect side feature matrices (containing latent features of F) are corrupted with ΔX and ΔY, where ‖ΔX‖_F ≤ s_1, ‖ΔY‖_F ≤ s_2 and S = max(s_1, s_2). To ε-recover F such that the expected loss E[l(f, F)] < ε for a given arbitrarily small ε > 0, O(min((θ² + γ²) log N, S²√N)/ε²) observations are sufficient for our model when the corrupted factors of side information are bounded.
Theorem 2 can be inferred from the fact that the trace norm of E and the ℓ_1-norm of G affect the sample complexity of our model. It matches the intuition that a higher-rank matrix ought to require more observations to recover. Besides, for the discovery of G, a sparse interactive matrix can lead to a decrease in the sample complexity, which implies that the side information, even when it is not perfect, can be informative enough that the original matrix can be compressed by sparse coding via the estimated interaction between the features of the row and column entities of the matrix. Our empirical evaluations have confirmed the utility of even imperfect side features.
When the rank of the original data matrix is r = O(1) (r ≪ N), and correspondingly θ = O(1), Theorem 2 points out that only an O(log N) sampling rate is required for an ε-recovery. The classic matrix completion analysis without side information shows that under certain conditions, one can achieve O(N poly log N) sample complexity for both perfect recovery [4] and ε-recovery [25], which is higher than our complexity. However, the condition for these existing bounds is that the observed entries follow a certain distribution. Recent studies [22] found that if no specific distribution is pre-assumed for the observed entries, an O(N^{3/2}) sampling rate is sufficient for an ε-recovery. Compared to those results, our analysis does not require any assumption on the distribution of observed entries. When X and Y contain insufficient interaction information about F and ‖E‖_* = O(N), the sample complexity of our method increases to O(N^{3/2}) in the worst case, which means that our model maintains the same complexity as the classic methods.
4 Adaptive LADMM Algorithm
In this section, we develop an adaptive LADMM algorithm [29] to solve problem (2). First, we show that ADMM is applicable to our problem, and we then derive the LADMM steps. A convergence proof is established to guarantee the performance of our algorithm.
Because ADMM requires separable blocks of variables, we first define $C = E - X^T G Y$ and use it in Eq. (2). The augmented Lagrangian function of (2) is then given by
$$L(E, G, C, M_1, M_2, \rho) = \tfrac{1}{2}\|C\|_F^2 + \lambda_E \|E\|_* + \lambda_G \|G\|_1 + \langle M_1, R_\Omega(E - F)\rangle + \langle M_2, E - X^T G Y - C\rangle + \tfrac{\rho}{2}\|R_\Omega(E - F)\|_F^2 + \tfrac{\rho}{2}\|E - X^T G Y - C\|_F^2 \qquad (3)$$
where $M_1, M_2 \in \mathbb{R}^{m \times n}$ are Lagrange multipliers and $\rho > 0$ is the penalty parameter. Given $C^k$, $G^k$, $E^k$, $M_1^k$ and $M_2^k$ at iteration $k$, each group of variables yields its respective subproblem:
$$C^{k+1} = \arg\min_C L(E^k, G^k, M_2^k, C, \rho^k), \quad G^{k+1} = \arg\min_G L(E^k, G, M_2^k, C^{k+1}, \rho^k), \quad E^{k+1} = \arg\min_E L(E, G^{k+1}, M_1^k, M_2^k, C^{k+1}, \rho^k). \qquad (4)$$
After solving these subproblems, we update the multipliers $M_1$ and $M_2$ as follows:
$$M_1^{k+1} = M_1^k + \rho^k R_\Omega(E^{k+1} - F), \qquad M_2^{k+1} = M_2^k + \rho^k (E^{k+1} - X^T G^{k+1} Y - C^{k+1}). \qquad (5)$$
We focus on demonstrating the iterative steps of the adaptive LADMM. Given $C^k$, $G^k$, $E^k$, $M_1^k$ and $M_2^k$, Algorithm 1 describes how to obtain the next iterate $(C^{k+1}, G^{k+1}, E^{k+1}, M_1^{k+1}, M_2^{k+1})$. A closed-form solution has been derived for each subproblem in the supplementary material.
Algorithm 1 The adaptive LADMM algorithm to solve for $C^k, G^k, E^k$, $k = 1, \ldots, K$
Input: X, Y and $R_\Omega(F)$, with parameters $\lambda_G, \lambda_E, \eta_A, \eta_B, \gamma$ and $\rho_{\max}$.
Output: C, G, E.
1: Initialize $E^0, G^0, M_1^0, M_2^0$. Compute $A = Y^T \otimes X^T$. Set $k = 0$. Repeat:
2: $C^{k+1} = \frac{\rho^k}{\rho^k + 1}\,(E^k - X^T G^k Y + M_2^k/\rho^k)$;
3: $G^{k+1} = \mathrm{reshape}\big(\max(|g^k - f_1^k/\eta_A| - \lambda_G/(\rho^k \eta_A),\, 0) \odot \mathrm{sgn}(g^k - f_1^k/\eta_A)\big)$, where $f_1^k = A^T(A g^k + c^{k+1} - e^k - m_2^k/\rho^k)$ and $e = \mathrm{vec}(E)$, $g = \mathrm{vec}(G)$, $m = \mathrm{vec}(M)$, $c = \mathrm{vec}(C)$;
4: $E^{k+1} = \mathrm{SVT}\big(E^k - (f_2^k + f_3^k)/(2\eta_B),\; \lambda_E/(2\rho^k \eta_B)\big)$, where $f_2^k = R_\Omega(E^k - F + M_1^k/\rho^k)$ and $f_3^k = E^k - X^T G^{k+1} Y - C^{k+1} + M_2^k/\rho^k$;
5: $M_1^{k+1} = M_1^k + \rho^k R_\Omega(E^{k+1} - F)$;
6: $M_2^{k+1} = M_2^k + \rho^k (E^{k+1} - X^T G^{k+1} Y - C^{k+1})$;
7: $\rho^{k+1} = \min(\rho_{\max}, \gamma \rho^k)$;
8: $k = k + 1$; until convergence.
Return C, G, E.
The adaptive parameter in Algorithm 1 is $\gamma > 1$, and $\rho_{\max}$ controls the upper bound of $\{\rho^k\}$. The operator reshape(g) converts a vector $g \in \mathbb{R}^{ab}$ into a matrix $G \in \mathbb{R}^{a \times b}$; it is the inverse of the operator vec(G). The operator SVT(E, t) is the singular value thresholding process defined in [3], which soft-thresholds the singular values of an arbitrary matrix E by a threshold t. The matrix $A = Y^T \otimes X^T$, where $\otimes$ denotes the Kronecker product. In the initialization step, $M_1^0$ and $M_2^0$ are randomly drawn from the standard Gaussian distribution; we initialize $E^0$ and $G^0$ by the iterative soft-thresholding algorithm [2] and the SVT operator, respectively.
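To make the closed-form updates concrete, the following is a minimal NumPy sketch of one iteration of Algorithm 1. It assumes a column-major vec(·) and a binary mask encoding $R_\Omega$, and it forms $A = Y^T \otimes X^T$ explicitly, which is feasible only for small problems; the variable names are our own, not those of any released implementation.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding [3]: soft-threshold the singular values of M by tau.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(v, tau):
    # Elementwise soft-thresholding (proximal operator of the l1-norm).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ladmm_step(C, G, E, M1, M2, rho, X, Y, F, mask,
               lam_E, lam_G, eta_A, eta_B, gamma, rho_max):
    """One iteration of Algorithm 1 (illustrative sketch)."""
    A = np.kron(Y.T, X.T)                 # vec(X^T G Y) = A vec(G), column-major vec
    vec = lambda M_: M_.reshape(-1, order='F')
    # Step 2: closed-form prox for C.
    C1 = rho / (rho + 1.0) * (E - X.T @ G @ Y + M2 / rho)
    # Step 3: linearized l1 prox for G, in vectorized form.
    g = vec(G)
    f1 = A.T @ (A @ g + vec(C1) - vec(E) - vec(M2) / rho)
    G1 = soft(g - f1 / eta_A, lam_G / (rho * eta_A)).reshape(G.shape, order='F')
    # Step 4: linearized nuclear-norm prox for E via SVT.
    f2 = mask * (E - F + M1 / rho)
    f3 = E - X.T @ G1 @ Y - C1 + M2 / rho
    E1 = svt(E - (f2 + f3) / (2.0 * eta_B), lam_E / (2.0 * rho * eta_B))
    # Steps 5-6: multiplier updates (5).
    M1n = M1 + rho * (mask * (E1 - F))
    M2n = M2 + rho * (E1 - X.T @ G1 @ Y - C1)
    # Step 7: adaptive penalty increase.
    return C1, G1, E1, M1n, M2n, min(rho_max, gamma * rho)
```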
The adaptive LADMM effectively solves the proposed optimization problem in several respects. First, the convergence of the commonly used block-wise coordinate descent (BCD) method, sometimes referred to as alternating minimization, typically requires that the optimization problem be strictly convex (or quasiconvex and hemivariate). The strongest result for BCD so far, established in [26], requires each alternating subproblem to be optimized to its unique optimal solution in every iteration. This requirement is often restrictive in practice. Our convex (but not strictly convex) problem can be solved by the adaptive LADMM with the global convergence guarantee characterized in Theorem 3. Second, two of the subproblems are non-smooth due to the $\ell_1$-norm or the nuclear norm, so it can be difficult to obtain a closed-form solution with standard optimization tools; the adaptive LADMM, however, uses linearization, which yields a closed-form solution for each linearized subproblem and significantly improves the efficiency of the iterative process. Third, the adaptive LADMM is practically parallelizable by a scheme similar to that of ADMM. It is also noted that the convergence rate of LADMM [11] and parallel LADMM [23] is O(1/k), whereas the BCD method still lacks clear theoretical results on its convergence rate.
Theorem 3 Define the stacked operators $\mathcal{A}(G) = \binom{0}{-X^T G Y}$ and $\mathcal{B}(E) = \binom{R_\Omega(E)}{E}$, and let $M = \binom{M_1}{M_2}$. If $\rho^k$ is non-decreasing and upper-bounded, $\eta_A > \|\mathcal{A}\|^2$, and $\eta_B > \|\mathcal{B}\|^2$, then the sequence $\{(C^k, G^k, E^k, M^k)\}$ generated by the adaptive LADMM Algorithm 1 converges to a global minimizer of Eq. (2).
5 Experimental Results
We validated our method in both simulations and the analysis of two real-world datasets: MovieLens (movie ratings) and NCI-DREAM (drug discovery). Three recent matrix completion methods that also utilize side information, MAXIDE [28], IMC [13] and DirtyIMC [9], were compared against our method. The design of our experiments focused on demonstrating the effectiveness of our method in practice. The performance of all methods was measured by the relative mean squared error (RMSE) calculated on the missing entries: $\|R_{\bar\Omega}(X^T G Y - F)\|_2^2 / \|R_{\bar\Omega}(F)\|_2^2$, where $\bar\Omega$ denotes the set of unobserved entries. For both synthetic and real-world datasets, we randomly set q percent of the components in each observed matrix F to be missing. The hyperparameters λ and the rank of G (required by IMC and DirtyIMC) were tuned via the same cross-validation process: we randomly picked 10% of the given entries to form a validation set. Models were then obtained by applying each method to the remaining entries with a specific choice of λ from $\{10^{-3}, 10^{-2}, \ldots, 10^{4}\}$. The average validation RMSE was examined by repeating the above procedure six times. The hyperparameter values that gave the best average validation RMSE were chosen for each method. For IMC and DirtyIMC, the best rank of G was chosen from 1 to 15 within each data split. For each choice of q, we repeated the above entire procedure six times and reported the average RMSE on the missing entries.
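For concreteness, the evaluation metric can be computed as in the following short sketch (mask is 1 on observed entries; the function name is illustrative):

```python
import numpy as np

def relative_mse_missing(G, X, Y, F, mask):
    # Relative MSE on the unobserved entries, as defined in Section 5.
    miss = 1 - mask
    return np.sum((miss * (X.T @ G @ Y - F)) ** 2) / np.sum((miss * F) ** 2)
```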
5.1 Synthetic Datasets
We created two different simulation tests, with and without full-row-rank X and Y. For all the synthetic datasets, we first randomly created X and Y. To make our simulations reminiscent of real situations, where the distributions of side features can be heterogeneous, the data for each feature in both X and Y were generated according to a distribution randomly selected from the Gaussian, Poisson and Gamma distributions. We created the sparse G matrices as follows: the locations of the non-zero entries of G were randomly picked, and their values were drawn from N(0, 100); we repeated this several times to choose matrices of full or high rank. We then generated F as $F = X^T G Y + N$, where N represents noise and each component $N_{i,j}$ was drawn from N(0, 1). For each simulated F, we ran all methods with $q \in [10\%, 80\%]$ in steps of 10%.
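A sketch of the described generator is given below; the shape and sparsity parameters are illustrative stand-ins, not the exact values used in every experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(a=15, m=50, b=20, n=140, nnz=100):
    # Heterogeneous side features: each row of X / Y drawn from a randomly
    # chosen Gaussian, Poisson or Gamma distribution (parameters illustrative).
    draw = [lambda k: rng.standard_normal(k),
            lambda k: rng.poisson(3.0, k).astype(float),
            lambda k: rng.gamma(2.0, 1.0, k)]
    X = np.vstack([draw[rng.integers(3)](m) for _ in range(a)])
    Y = np.vstack([draw[rng.integers(3)](n) for _ in range(b)])
    # Sparse G: random support, values from N(0, 100), i.e. std 10.
    g = np.zeros(a * b)
    g[rng.choice(a * b, size=nnz, replace=False)] = rng.normal(0.0, 10.0, nnz)
    G = g.reshape(a, b)
    F = X.T @ G @ Y + rng.standard_normal((m, n))   # Gaussian noise N(0, 1)
    return X, Y, G, F
```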
We compared the different methods in three settings, labeled synthetic experiments I, II and III in our results. In the first setting, the dimensions of X and Y were set to 15 × 50 and 20 × 140, and all features in these two matrices were randomly generated to make them full row rank. The last two settings correspond to the second test, where X and Y are not full row rank. The dimensions of X and Y were set to 16 × 50, 21 × 140 and 20 × 50, 25 × 140, respectively, for these two settings; the first 15 features in X and 20 features in Y were randomly created, but the remaining features were generated as arbitrary linear combinations of the randomly created features. For all three settings, we used 10 synthetic datasets and report the mean and standard deviation of the RMSE on missing values, as shown in Figure 1.
Figure 1: The comparison of RMSE for synthetic experiments I, II, and III. (Each panel plots RMSE against the missing percentage for our approach, MAXIDE, IMC, and DirtyIMC.)
Our approach significantly outperformed all other compared methods in almost all these settings. When the missing rate q increased, the RMSE of our method grew much more slowly than that of the other methods. We studied the ranks of the recovered G and E in the first setting. For all methods, the G and E that gave the best performance were examined. The ranks of G and E from our method, MAXIDE, IMC and DirtyIMC were 15, 8, 1, 1 and 15, 15, 1, 2, respectively. These results suggest that incorporating the strong prior of a low-rank G might hurt the recovery performance. The retrieved model matrices G of all compared methods (when using q = 10% missing entries in one of the 10 synthetic datasets), together with the true G, are plotted in Figure 2. Only our method was able to recover the true G; all the other methods merely found approximations.
Figure 2: The heatmap of the true G and recovered G matrices in Synthetic Experiment I.
5.2 Real-world Datasets
We used the two relatively large datasets that we could find suitable for our empirical evaluation. Note that early methods employing side information were often tested on datasets with either X or Y but not both, although some of those datasets may be larger than the two we used.
5.2.1 MovieLens. This dataset was downloaded from [12] and contains 100,000 user ratings (integers from 1 to 5) from 943 users on 1682 movies. There are 20 movie features, such as genre and release date, as well as 24 user features describing the users' demographic information, such as age and gender. We compared all methods with four different q values: 20–50%. The RMSE values of each method are shown in Table 1, which shows that our approach significantly outperformed the other methods, especially when q was large. Figure 3 shows the constructed G matrix, which reveals some interesting patterns. For instance, male users tend to rate action, science fiction, thriller and war movies highly but children's movies low, matching common intuitions.
5.2.2 NCI-DREAM Challenge. The data on the reactions of 46 breast cancer cell lines to 26 drugs
and the expression data of 18633 genes for all the cell lines were provided by the NCI-DREAM Challenge [10].
MovieLens Data (RMSE ± std):
Methods         20%             30%             40%             50%
Our approach    0.276 (±0.001)  0.279 (±0.002)  0.284 (±0.001)  0.292 (±0.001)
MAXIDE          0.424 (±0.016)  0.425 (±0.013)  0.419 (±0.008)  0.421 (±0.013)
IMC             0.935 (±0.001)  0.943 (±0.001)  0.945 (±0.001)  0.959 (±0.001)
DirtyIMC        0.705 (±0.001)  0.738 (±0.001)  0.775 (±0.001)  0.814 (±0.001)

NCI-DREAM Challenge (RMSE ± std):
Methods         20%             30%             40%             50%
Our approach    0.181 (±0.069)  0.139 (±0.010)  0.145 (±0.018)  0.190 (±0.031)
MAXIDE          0.268 (±0.036)  0.240 (±0.007)  0.255 (±0.016)  0.288 (±0.022)
IMC             0.437 (±0.031)  0.489 (±0.003)  0.557 (±0.013)  0.637 (±0.011)
DirtyIMC        0.432 (±0.033)  0.475 (±0.008)  0.551 (±0.018)  0.632 (±0.011)

Table 1: The comparison of RMSE values of different methods on real-world datasets.
For each drug, we had 14 features that describe its chemical and physical properties, such as molecular weight, XLogP3 and hydrogen bond donor count; these were downloaded from the National Center for Biotechnology Information (http://pubchem.ncbi.nlm.nih.gov/). For the cell
line features, we ran principal component analysis (PCA) and used the top 45 principal components
that accounted for more than 99.99% of the total data variance. We compared the four different
methods with four different q values: 20–50%. The RMSE values of all methods are provided in Table 1, where our method again shows the best performance. We examined the ranks of both G and E obtained by all the methods: they were 15, 15, 1, 1 for G and 2, 15, 1, 2 for E, respectively, for our approach, MAXIDE, IMC and DirtyIMC. This demonstrates that a low-rank E but a high-rank G gives the best performance on this dataset. In other words, requiring a low-rank G may hurt the performance of recovering a low-rank E.
The G constructed by our method is plotted in Figure 4, where columns represent cell line features (i.e., principal components) and rows represent drug features. Please refer to the supplementary material for the names of these features. According to this figure, the drug features XLogP (F2), hydrogen bond donor (HBD) (F3), hydrogen bond acceptor (HBA) (F4) and rotatable bond number (F5) all played important roles in drug sensitivity. This result aligns well with biological knowledge, as all four features are very important descriptors for cellular entry and retention.
Figure 3: Heatmap of G for MovieLens.
Figure 4: Heatmap of sign(G) log(|G|) for NCI-DREAM, shown for better illustration.

6 Conclusion
In this paper, we have proposed a novel sparse inductive model that utilizes side features describing the row and column entities of a partially observed matrix to predict its missing entries. The model captures the linear predictive power of the side features as well as the interaction between the features of the row and column entities. Theoretical analysis shows that this model enjoys a reduced sample complexity over classical matrix completion methods, requiring only O(log N) observed entries to achieve a perfect recovery of the original matrix when the side features reflect the true latent feature space of the matrix. When the side features are less informative, our model requires O(log N) observations for an ε-recovery of the matrix. Unlike early methods that use a BCD algorithm, we have developed a LADMM algorithm to optimize the proposed formulation. Since the optimization problem is convex, this algorithm converges to a global solution. Computational results demonstrate the superior performance of this method over three recent methods. Future work includes examining other types and qualities of side information, and understanding whether our method benefits a variety of related problems, such as multi-label learning and semi-supervised clustering.
Acknowledgments
Jinbo Bi and her students Jin Lu, Guannan Liang and Jiangwen Sun were supported by NSF grants
IIS-1320586, DBI-1356655, and CCF-1514357 and NIH R01DA037349.
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. A new approach to collaborative filtering: Operator estimation with spectral regularization. The Journal of Machine Learning Research, 10:803–826, 2009.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[3] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, Mar. 2010.
[4] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[6] P. Chen and D. Suter. Recovering the missing components in a large noisy low-rank matrix: Application to SFM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1051–1063, Aug. 2004.
[7] T. Chen, W. Zhang, Q. Lu, K. Chen, Z. Zheng, and Y. Yu. SVDFeature: a toolkit for feature-based collaborative filtering. The Journal of Machine Learning Research, 13(1):3619–3622, 2012.
[8] K.-Y. Chiang, C.-J. Hsieh, and I. S. Dhillon. Robust principal component analysis with side information. In Proceedings of the 33rd International Conference on Machine Learning, pages 2291–2299, 2016.
[9] K.-Y. Chiang, C.-J. Hsieh, and I. S. Dhillon. Matrix completion with noisy side information. In Advances in Neural Information Processing Systems 28, pages 3429–3437, 2015.
[10] A. Daemen, O. L. Griffith, L. M. Heiser, N. J. Wang, O. M. Enache, Z. Sanborn, F. Pepin, S. Durinck, J. E. Korkola, M. Griffith, et al. Modeling precision treatment of breast cancer. Genome Biology, 14(10):R110, 2013.
[11] E. X. Fang, B. He, H. Liu, and X. Yuan. Generalized alternating direction method of multipliers: new theoretical insights and applications. Mathematical Programming Computation, 7(2):149–187, 2015.
[12] F. M. Harper and J. A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems, 5(4):19:1–19:19, Dec. 2015.
[13] P. Jain and I. S. Dhillon. Provable inductive matrix completion. arXiv preprint arXiv:1306.0626, 2013.
[14] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, June 2010.
[15] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Mathematical Programming, 2010.
[16] G. Liu and P. Li. Low-rank matrix completion in the presence of high coherence. IEEE Transactions on Signal Processing, 64(21):5623–5633, Nov. 2016.
[17] A. K. Menon, K.-P. Chitrapura, S. Garg, D. Agarwal, and N. Kota. Response prediction using collaborative filtering with hierarchies and side-information. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 141–149. ACM, 2011.
[18] N. Natarajan and I. S. Dhillon. Inductive matrix completion for predicting gene–disease associations. Bioinformatics, 30(12):i60–i68, 2014.
[19] X. Ning and G. Karypis. Sparse linear methods with side information for top-n recommendations. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, pages 155–162, New York, NY, USA, 2012. ACM.
[20] B. Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011.
[21] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 713–719, New York, NY, USA, 2005. ACM.
[22] O. Shamir and S. Shalev-Shwartz. Matrix completion with the trace norm: learning, bounding, and transducing. The Journal of Machine Learning Research, 15(1):3401–3423, 2014.
[23] W. Shi, Q. Ling, G. Wu, and W. Yin. A proximal gradient algorithm for decentralized composite optimization. IEEE Transactions on Signal Processing, 63(22):6013–6023, Nov. 2015.
[24] V. Sindhwani, S. Bucak, J. Hu, and A. Mojsilovic. One-class matrix completion with low-density factorizations. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 1055–1060, Dec. 2010.
[25] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory (COLT), pages 545–560, 2005.
[26] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475–494, 2001.
[27] Z. Weng and X. Wang. Low-rank matrix completion for array signal processing. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 2697–2700. IEEE, 2012.
[28] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In Advances in Neural Information Processing Systems 26, pages 2301–2309, 2013.
[29] J. Yang and X.-M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82, 2013.
5,820 | 6,266 | Ancestral Causal Inference
Sara Magliacane
VU Amsterdam & University of Amsterdam
[email protected]
Tom Claassen
Radboud University Nijmegen
[email protected]
Joris M. Mooij
University of Amsterdam
[email protected]
Abstract
Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches
to improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing
approaches can still be greatly improved in terms of accuracy and scalability. We
present a novel method that reduces the combinatorial explosion of the search space
by using a more coarse-grained representation of causal information, drastically
reducing computation time. Additionally, we propose a method to score causal predictions based on their confidence. Crucially, our implementation also allows one
to easily combine observational and interventional data and to incorporate various
types of available background knowledge. We prove soundness and asymptotic
consistency of our method and demonstrate that it can outperform the state-of-the-art on synthetic data, achieving a speedup of several orders of magnitude. We
illustrate its practical feasibility by applying it to a challenging protein data set.
1 Introduction
Discovering causal relations from data is at the foundation of the scientific method. Traditionally,
cause-effect relations have been recovered from experimental data in which the variable of interest is
perturbed, but seminal work like the do-calculus [16] and the PC/FCI algorithms [23, 26] demonstrate
that, under certain assumptions (e.g., the well-known Causal Markov and Faithfulness assumptions
[23]), it is already possible to obtain substantial causal information by using only observational data.
Recently, there have been several proposals for combining observational and experimental data to
discover causal relations. These causal discovery methods are usually divided into two categories:
constraint-based and score-based methods. Score-based methods typically evaluate models using a
penalized likelihood score, while constraint-based methods use statistical independences to express
constraints over possible causal models. The advantages of constraint-based over score-based methods
are the ability to handle latent confounders and selection bias naturally, and that there is no need
for parametric modeling assumptions. Additionally, constraint-based methods expressed in logic
[2, 3, 25, 8] allow for an easy integration of background knowledge, which is not trivial even for
simple cases in approaches that are not based on logic [1].
Two major disadvantages of traditional constraint-based methods are: (i) vulnerability to errors
in statistical independence test results, which are quite common in real-world applications, (ii) no
ranking or estimation of the confidence in the causal predictions. Several approaches address the
first issue and improve the reliability of constraint-based methods by exploiting redundancy in the
independence information [3, 8, 25]. The idea is to assign weights to the input statements that reflect
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
their reliability, and then use a reasoning scheme that takes these weights into account. Several
weighting schemes can be defined, from simple ways to attach weights to single independence
statements [8], to more complicated schemes to obtain weights for combinations of independence
statements [25, 3]. Unfortunately, these approaches have to sacrifice either accuracy by using a greedy
method [3, 25], or scalability by formulating a discrete optimization problem on a super-exponentially
large search space [8]. Additionally, the confidence estimation issue is addressed only in limited
cases [17].
We propose Ancestral Causal Inference (ACI), a logic-based method that provides comparable
accuracy to the best state-of-the-art constraint-based methods (e.g., [8]) for causal systems with
latent variables without feedback, but improves on their scalability by using a more coarse-grained
representation of causal information. Instead of representing all possible direct causal relations, in
ACI we represent and reason only with ancestral relations (?indirect? causal relations), developing
specialised ancestral reasoning rules. This representation, though still super-exponentially large,
drastically reduces computation time. Moreover, it turns out to be very convenient, because in
real-world applications the distinction between direct causal relations and ancestral relations is not
always clear or necessary. Given the estimated ancestral relations, the estimation can be refined to
direct causal relations by constraining standard methods to a smaller search space, if necessary.
Furthermore, we propose a method to score predictions according to their confidence. The confidence
score can be thought of as an approximation to the marginal probability of an ancestral relation.
Scoring predictions enables one to rank them according to their reliability, allowing for higher
accuracy. This is very important for practical applications, as the low reliability of the predictions of
constraint-based methods has been a major impediment to their widespread use.
We prove soundness and asymptotic consistency under mild conditions on the statistical tests for ACI
and our scoring method. We show that ACI outperforms standard methods, like bootstrapped FCI
and CFCI, in terms of accuracy, and achieves a speedup of several orders of magnitude over [8] on a
synthetic dataset. We illustrate its practical feasibility by applying it to a challenging protein data set
[21] that so far had only been addressed with score-based methods and observe that it successfully
recovers from faithfulness violations. In this context, we showcase the flexibility of logic-based
approaches by introducing weighted ancestral relation constraints that we obtain from a combination
of observational and interventional data, and show that they substantially increase the reliability of
the predictions. Finally, we provide an open-source version of our algorithms and the evaluation
framework, which can be easily extended, at http://github.com/caus-am/aci.
2 Preliminaries and related work
Preliminaries We assume that the data generating process can be modeled by a causal Directed
Acyclic Graph (DAG) that may contain latent variables. For simplicity we also assume that there is
no selection bias. Finally, we assume that the Causal Markov Assumption and the Causal Faithfulness
Assumption [23] both hold. In other words, the conditional independences in the observational
distribution correspond one-to-one with the d-separations in the causal DAG. Throughout the paper
we represent variables with uppercase letters, while sets of variables are denoted by boldface. All
proofs are provided in the Supplementary Material.
A directed edge X ? Y in the causal DAG represents a direct causal relation between cause X on
effect Y . Intuitively, in this framework this indicates that manipulating X will produce a change in
Y , while manipulating Y will have no effect on X. A more detailed discussion can be found in [23].
A sequence of directed edges X1 ? X2 ? ? ? ? ? Xn is a directed path. If there exists a directed
path from X to Y (or X = Y ), then X is an ancestor of Y (denoted as X 99K Y ). Otherwise, X is
not an ancestor of Y (denoted as X 699K Y ). For a set of variables W , we write:
X 99K W := ∃Y ∈ W : X 99K Y,  X 699K W := ∀Y ∈ W : X 699K Y.  (1)
We define an ancestral structure as any non-strict partial order on the observed variables of the DAG,
i.e., any relation that satisfies the following axioms:
(reflexivity): X 99K X,  (2)
(transitivity): X 99K Y ∧ Y 99K Z ⇒ X 99K Z,  (3)
(antisymmetry): X 99K Y ∧ Y 99K X ⇒ X = Y.  (4)
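These axioms translate directly into a membership test. The following is a small illustrative helper (our own, not part of the ACI implementation, which reasons in ASP), where a structure is a set of ordered pairs (x, y) meaning X 99K Y:

```python
def is_ancestral_structure(rel, variables):
    # Check axioms (2)-(4) for a set `rel` of ordered pairs (x, y).
    reflexive = all((v, v) in rel for v in variables)
    transitive = all((x, z) in rel
                     for (x, y1) in rel for (y2, z) in rel if y1 == y2)
    antisymmetric = all(x == y or (y, x) not in rel for (x, y) in rel)
    return reflexive and transitive and antisymmetric
```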
The underlying causal DAG induces a unique "true" ancestral structure, which represents the transitive
closure of the direct causal relations projected on the observed variables.
For disjoint sets X, Y, W we denote conditional independence of X and Y given W as X ⊥⊥ Y | W, and conditional dependence as X 6⊥⊥ Y | W. We call the cardinality |W| the order of the conditional (in)dependence relation. Following [2] we define a minimal conditional independence by:

X ⊥⊥ Y | W ∪ [Z] := (X ⊥⊥ Y | W ∪ Z) ∧ (X 6⊥⊥ Y | W),

and similarly, a minimal conditional dependence by:

X 6⊥⊥ Y | W ∪ [Z] := (X 6⊥⊥ Y | W ∪ Z) ∧ (X ⊥⊥ Y | W).
The square brackets indicate that Z is needed for the (in)dependence to hold in the context of W . Note
that the negation of a minimal conditional independence is not a minimal conditional dependence.
Minimal conditional (in)dependences are closely related to ancestral relations, as pointed out in [2]:
Lemma 1. For disjoint (sets of) variables X, Y, Z, W :
X ⊥⊥ Y | W ∪ [Z] ⇒ Z 99K ({X, Y} ∪ W),  (5)
X 6⊥⊥ Y | W ∪ [Z] ⇒ Z 699K ({X, Y} ∪ W).  (6)
Exploiting these rules (as well as others that will be introduced in Section 3) to deduce ancestral
relations directly from (in)dependences is key to the greatly improved scalability of our method.
Related work on conflict resolution One of the earliest algorithms to deal with conflicting inputs
in constraint-based causal discovery is Conservative PC [18], which adds "redundant" checks to the
PC algorithm that allow it to detect inconsistencies in the inputs, and then makes only predictions that
do not rely on the ambiguous inputs. The same idea can be applied to FCI, yielding Conservative FCI
(CFCI) [4, 10]. BCCD (Bayesian Constraint-based Causal Discovery) [3] uses Bayesian confidence
estimates to process information in decreasing order of reliability, discarding contradictory inputs as
they arise. COmbINE (Causal discovery from Overlapping INtErventions) [25] is an algorithm that
combines the output of FCI on several overlapping observational and experimental datasets into a
single causal model by first pooling and recalibrating the independence test p-values, and then adding
each constraint incrementally in order of reliability to a SAT instance. Any constraint that makes the
problem unsatisfiable is discarded.
Our approach is inspired by a method presented by Hyttinen, Eberhardt and Järvisalo [8] (that
we will refer to as HEJ in this paper), in which causal discovery is formulated as a constrained
discrete minimization problem. Given a list of weighted independence statements, HEJ searches
for the optimal causal graph G (an acyclic directed mixed graph, or ADMG) that minimizes the
sum of the weights of the independence statements that are violated according to G. In order to
test whether a causal graph G induces a certain independence, the method creates an encoding DAG
of d-connection graphs. D-connection graphs are graphs that can be obtained from a causal graph
through a series of operations (conditioning, marginalization and interventions). An encoding DAG
of d-connection graphs is a complex structure encoding all possible d-connection graphs and the
sequence of operations that generated them from a given causal graph. This approach has been shown
to correct errors in the inputs, but is computationally demanding because of the huge search space.
3 ACI: Ancestral Causal Inference
We propose Ancestral Causal Inference (ACI), a causal discovery method that accurately reconstructs
ancestral structures, also in the presence of latent variables and statistical errors. ACI builds on HEJ
[8], but rather than optimizing over encoding DAGs, ACI optimizes over the much simpler (but still
very expressive) ancestral structures.
For n variables, the number of possible ancestral structures is the number of partial orders (http://oeis.org/A001035), which grows as $2^{n^2/4 + o(n^2)}$ [11], while the number of DAGs can be computed with a well-known super-exponential recurrence formula (http://oeis.org/A003024). The number of ADMGs is $|\mathrm{DAG}(n)| \cdot 2^{n(n-1)/2}$. Although still super-exponential, the number of ancestral structures grows asymptotically much more slowly than the number of DAGs, and even more so than the number of ADMGs. For example, for 7 variables there are $6 \times 10^6$ ancestral structures but already $2.3 \times 10^{15}$ ADMGs, which lower-bound the number of encoding DAGs of d-connection graphs used by HEJ.
New rules The rules in HEJ explicitly encode marginalization and conditioning operations on
d-connection graphs, so they cannot be easily adapted to work directly with ancestral relations.
Instead, ACI encodes the ancestral reasoning rules (2)–(6) and five novel causal reasoning rules:
Lemma 2. For disjoint (sets of) variables X, Y, U, Z, W:
(X ⊥⊥ Y | Z) ∧ (X 699K Z) ⇒ X 699K Y,  (7)
X 6⊥⊥ Y | W ∪ [Z] ⇒ X 6⊥⊥ Z | W,  (8)
X ⊥⊥ Y | W ∪ [Z] ⇒ X 6⊥⊥ Z | W,  (9)
(X ⊥⊥ Y | W ∪ [Z]) ∧ (X ⊥⊥ Z | W ∪ U) ⇒ (X ⊥⊥ Y | W ∪ U),  (10)
(Z 6⊥⊥ X | W) ∧ (Z 6⊥⊥ Y | W) ∧ (X ⊥⊥ Y | W) ⇒ X 6⊥⊥ Y | W ∪ Z.  (11)
We prove the soundness of the rules in the Supplementary Material. We elaborate some conjectures
about their completeness in the discussion after Theorem 1 in the next Section.
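To illustrate how such rules propagate information, here is a toy one-pass sketch over oracle statements; the tuple encodings are our own, and the real ACI system iterates reasoning to a fixed point inside an ASP solver rather than in Python:

```python
def apply_rules_once(indeps, min_indeps, min_deps):
    """Encodings (illustrative): indeps holds (x, y, z) for "x indep y | z";
    min_indeps / min_deps hold (x, y, w, z) for the minimal statements
    "x (in)dep y | w u [z]" with w, z as tuples of variables."""
    anc, non_anc, new_deps = set(), set(), set()
    for (x, y, w, z) in min_indeps:
        anc |= {(z, v) for v in {x, y} | set(w)}       # rule (5)
        new_deps.add((x, z, w))                         # rule (9)
    for (x, y, w, z) in min_deps:
        non_anc |= {(z, v) for v in {x, y} | set(w)}   # rule (6)
        new_deps.add((x, z, w))                         # rule (8)
    for (x, y, z) in indeps:                            # rule (7)
        if all((x, v) in non_anc for v in z):           # x not ancestor of z
            non_anc.add((x, y))
    return anc, non_anc, new_deps
```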
Optimization of loss function We formulate causal discovery as an optimization problem where
a loss function is optimized over possible causal structures. Intuitively, the loss function sums the
weights of all the inputs that are violated in a candidate causal structure.
Given a list I of weighted input statements (ij , wj ), where ij is the input statement and wj is the
associated weight, we define the loss function as the sum of the weights of the input statements that
are not satisfied in a given possible structure $W \in \mathcal{W}$, where $\mathcal{W}$ denotes the set of all possible causal structures. Causal discovery is formulated as a discrete optimization problem:

$$W^* = \arg\min_{W \in \mathcal{W}} L(W; I), \qquad (12)$$

$$L(W; I) := \sum_{(i_j, w_j) \in I : \, W \cup R \models \neg i_j} w_j, \qquad (13)$$

where $W \cup R \models \neg i_j$ means that input $i_j$ is not satisfied in structure W according to the rules R.
This general formulation includes both HEJ and ACI, which differ in the types of possible structures
W and the rules R. In HEJ W represents all possible causal graphs (specifically, acyclic directed
mixed graphs, or ADMGs, in the acyclic case) and R are operations on d-connection graphs. In ACI
W represents ancestral structures (defined by rules (2)–(4)) and the rules R are rules (5)–(11).
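For very small problems, the optimization (12)–(13) can be illustrated by brute force, reusing is_ancestral_structure from the sketch above; here inputs are modeled abstractly as predicates over a structure, whereas ACI derives their truth values via rules (5)–(11) in ASP:

```python
from itertools import product

def all_ancestral_structures(variables):
    # Brute-force enumeration; feasible only for about 3-4 variables.
    pairs = [(x, y) for x in variables for y in variables if x != y]
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {(v, v) for v in variables} | {p for p, b in zip(pairs, bits) if b}
        if is_ancestral_structure(rel, variables):
            yield frozenset(rel)

def loss(rel, inputs):
    # Loss (13): inputs are (predicate over rel, weight) pairs; the predicate
    # plays the role of "W u R |= i_j".
    return sum(w for stmt, w in inputs if not stmt(rel))

def best_structure(variables, inputs):
    return min(all_ancestral_structures(variables), key=lambda r: loss(r, inputs))
```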
Constrained optimization in ASP The constrained optimization problem in (12) can be implemented using a variety of methods. Given the complexity of the rules, a formulation in an expressive
logical language that supports optimization, e.g., Answer Set Programming (ASP), is very convenient.
ASP is a widely used declarative programming language based on the stable model semantics [12, 7]
that has successfully been applied to several NP-hard problems. For ACI we use the state-of-the-art
ASP solver clingo 4 [6]. We provide the encoding in the Supplementary Material.
Weighting schemes ACI supports two types of input statements: conditional independences and
ancestral relations. These statements can each be assigned a weight that reflects their confidence. We
propose two simple approaches with the desirable properties of making ACI asymptotically consistent
under mild assumptions (as described in the end of this Section), and assigning a much smaller weight
to independences than to dependences (which agrees with the intuition that one is confident about a
measured strong dependence, but not about independence vs. weak dependence). The approaches are:
• a frequentist approach, in which for any appropriate frequentist statistical test with independence (resp. a non-ancestral relation) as null hypothesis, we define the weight:

$$w = |\log p - \log \alpha|, \qquad (14)$$

where p is the p-value of the test and α the significance level (e.g., 5%);

• a Bayesian approach, in which the weight of each input statement i using data set D is:

$$w = \log \frac{p(i \mid D)}{p(\neg i \mid D)} = \log \frac{p(D \mid i)\, p(i)}{p(D \mid \neg i)\, p(\neg i)}, \qquad (15)$$

where the prior probability p(i) can be used as a tuning parameter.
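The frequentist weighting (14) is straightforward to compute. The sketch below also includes a standard partial-correlation p-value via residual regression and Fisher's z-transform (our own helper, approximating the tests described in Section 5):

```python
import numpy as np
from scipy import stats

def frequentist_weight(p_value, alpha=0.05):
    # Weight (14); the small floor guards against p == 0.
    return abs(np.log(max(p_value, 1e-300)) - np.log(alpha))

def partial_corr_pvalue(data, i, j, cond=()):
    # p-value for the partial correlation of columns i, j given `cond`,
    # via least-squares residuals and Fisher's z-transform (see, e.g., [9]).
    def residual(k):
        Z = np.column_stack([data[:, c] for c in cond] + [np.ones(len(data))])
        beta, *_ = np.linalg.lstsq(Z, data[:, k], rcond=None)
        return data[:, k] - Z @ beta
    r = np.corrcoef(residual(i), residual(j))[0, 1]
    z = np.sqrt(len(data) - len(cond) - 3) * np.arctanh(r)
    return 2 * stats.norm.sf(abs(z))
```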
Given observational and interventional data, in which each intervention has a single known target (in
particular, it is not a fat-hand intervention [5]), a simple way to obtain a weighted ancestral statement
X 99K Y is with a two-sample test that tests whether the distribution of Y changes with respect to
its observational distribution when intervening on X. This approach conveniently applies to various
types of interventions: perfect interventions [16], soft interventions [14], mechanism changes [24],
and activity interventions [15]. The two-sample test can also be implemented as an independence test
that tests for the independence of Y and IX , the indicator variable that has value 0 for observational
samples and 1 for samples from the interventional distribution in which X has been intervened upon.
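Following this recipe, a weighted ancestral statement can be produced from observational and interventional samples of Y, for instance with Welch's t-test as a simple stand-in for the two-sample test described above:

```python
import numpy as np
from scipy import stats

def weighted_ancestral_statement(y_obs, y_int, alpha=0.05):
    # Does intervening on X shift the distribution of Y relative to the
    # observational baseline? Returns the statement and its weight (14).
    _, p = stats.ttest_ind(y_obs, y_int, equal_var=False)
    w = abs(np.log(max(p, 1e-300)) - np.log(alpha))
    return ('X 99K Y' if p < alpha else 'X 699K Y'), w
```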
4 Scoring causal predictions
The constrained minimization in (12) may produce several optimal solutions, because the underlying
structure may not be identifiable from the inputs. To address this issue, we propose to use the loss
function (13) and score the confidence of a feature f (e.g., an ancestral relation X 99K Y ) as:
$$C(f) = \min_{W \in \mathcal{W}} L(W; I \cup \{(\neg f, \infty)\}) \;-\; \min_{W \in \mathcal{W}} L(W; I \cup \{(f, \infty)\}). \qquad (16)$$
Without going into details here, we note that the confidence (16) can be interpreted as a MAP
approximation of the log-odds ratio of the probability that feature f is true in a Markov Logic model:
$$\frac{P(f \mid I, R)}{P(\neg f \mid I, R)} = \frac{\sum_{W \in \mathcal{W}} e^{-L(W; I)}\, \mathbb{1}_{W \cup R \models f}}{\sum_{W \in \mathcal{W}} e^{-L(W; I)}\, \mathbb{1}_{W \cup R \models \neg f}} \;\approx\; \frac{\max_{W \in \mathcal{W}} e^{-L(W;\, I \cup \{(f, \infty)\})}}{\max_{W \in \mathcal{W}} e^{-L(W;\, I \cup \{(\neg f, \infty)\})}} = e^{C(f)}.$$
In this paper, we usually consider the features f to be ancestral relations, but the idea is more generally
applicable. For example, combined with HEJ it can be used to score direct causal relations.
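Continuing the brute-force sketch above (reusing loss and all_ancestral_structures), the confidence score (16) amounts to two constrained minimizations, with a large finite weight standing in for ∞:

```python
INF = 1e9   # large finite stand-in for an infinite weight

def confidence(f, variables, inputs):
    # Score (16); f is a predicate over structures,
    # e.g. lambda r: ('X', 'Y') in r for the feature "X 99K Y".
    not_f = lambda r: not f(r)
    best_with_not_f = min(loss(r, inputs + [(not_f, INF)])
                          for r in all_ancestral_structures(variables))
    best_with_f = min(loss(r, inputs + [(f, INF)])
                      for r in all_ancestral_structures(variables))
    return best_with_not_f - best_with_f
```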
Soundness and completeness Our scoring method is sound for oracle inputs:
Theorem 1. Let R be sound (not necessarily complete) causal reasoning rules. For any feature f ,
the confidence score C(f ) of (16) is sound for oracle inputs with infinite weights.
Here, soundness means that C(f) = ∞ if f is identifiable from the inputs, C(f) = −∞ if ¬f is identifiable from the inputs, and C(f) = 0 otherwise (neither is identifiable). As features, we
can consider for example ancestral relations f = X 99K Y for variables X, Y . We conjecture that
the rules (2)–(11) are "order-1-complete", i.e., they allow one to deduce all (non)ancestral relations that are identifiable from oracle conditional independences of order ≤ 1 in observational data. For
higher-order inputs additional rules can be derived. However, our primary interest in this work is
improving computation time and accuracy, and we are willing to sacrifice completeness. A more
detailed study of the completeness properties is left as future work.
Asymptotic consistency Denote the number of samples by N . For the frequentist weights in (14),
we assume that the statistical tests are consistent in the following sense:
$$\log p_N - \log \alpha_N \;\xrightarrow{P}\; \begin{cases} -\infty & H_1 \\ +\infty & H_0, \end{cases} \qquad (17)$$

as N → ∞, where the null hypothesis $H_0$ is an independence/nonancestral relation and the alternative hypothesis $H_1$ is a dependence/ancestral relation. Note that we need to choose a sample-size-dependent threshold $\alpha_N$ such that $\alpha_N \to 0$ at a suitable rate. Kalisch and Bühlmann [9] show how this can be
done for partial correlation tests under the assumption that the distribution is multivariate Gaussian.
For the Bayesian weighting scheme in (15), we assume that for N → ∞,

$$w_N \;\xrightarrow{P}\; \begin{cases} +\infty & \text{if } i \text{ is true} \\ -\infty & \text{if } i \text{ is false.} \end{cases} \qquad (18)$$
This will hold (as long as there is no model misspecification) under mild technical conditions for
finite-dimensional exponential family models. In both cases, the probability of a type I or type II
error will converge to 0, and in addition, the corresponding weight will converge to ∞.
Theorem 2. Let R be sound (not necessarily complete) causal reasoning rules. For any feature f ,
the confidence score C(f ) of (16) is asymptotically consistent under assumption (17) or (18).
Here, "asymptotically consistent" means that the confidence score C(f) → ∞ in probability if f is identifiably true, C(f) → −∞ in probability if f is identifiably false, and C(f) → 0 in probability
otherwise.
Figure 1: Execution time comparison on synthetic data for the frequentist test on 2000 synthetic models: (a) average execution time for different combinations of number of variables n and max. order c; (b) detailed plot of execution times for n = 7, c = 1 (logarithmic scale; execution time in seconds against instances sorted by solution time, for HEJ and ACI).

(a) Average execution time (s):
n  c  ACI     HEJ      BAFCI  BACFCI
6  1  0.21    12.09    8.39   12.51
6  4  1.66    432.67   11.10  16.36
7  1  1.03    715.74   9.37   15.12
8  1  9.74    ≥ 2500   13.71  21.71
9  1  146.66  2500     18.28  28.51
5 Evaluation
In this section we report evaluations on synthetically generated data and an application on a real
dataset. Crucially, in causal discovery precision is often more important than recall. In many realworld applications, discovering a few high-confidence causal relations is more useful than finding
every possible causal relation, as reflected in recently proposed algorithms, e.g., [17].
Compared methods We compare the predictions of ACI and of the acyclic causally insufficient
version of HEJ [8], when used in combination with our scoring method (16). We also evaluate two
standard methods: Anytime FCI [22, 26] and Anytime CFCI [4], as implemented in the pcalg R
package [10]. We use the anytime versions of (C)FCI because they allow for independence test
results up to a certain order. We obtain the ancestral relations from the output PAG using Theorem
3.1 from [20]. (Anytime) FCI and CFCI do not rank their predictions, but only predict the type of
relation: ancestral (which we convert to +1), non-ancestral (-1) and unknown (0). To get a scoring of
the predictions, we also compare with bootstrapped versions of Anytime FCI and Anytime CFCI.
We perform the bootstrap by repeating the following procedure 100 times: sample randomly half
of the data, perform the independence tests, run Anytime (C)FCI. From the 100 output PAGs we
extract the ancestral predictions and average them. We refer to these methods as BA(C)FCI. For a
fair comparison, we use the same independence tests and thresholds for all methods.
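This bootstrap baseline can be sketched as follows, where run_algorithm is a hypothetical callable returning a matrix of {−1, 0, +1} ancestral predictions (e.g., wrapping Anytime (C)FCI):

```python
import numpy as np

def bootstrap_scores(data, run_algorithm, n_boot=100, seed=0):
    # Average the {-1, 0, +1} predictions over random half-samples of the data.
    rng = np.random.default_rng(seed)
    n = len(data)
    preds = [run_algorithm(data[rng.choice(n, n // 2, replace=False)])
             for _ in range(n_boot)]
    return np.mean(preds, axis=0)
```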
Synthetic data We simulate the data using the simulator from HEJ [8]: for each experimental
condition (e.g., a given number of variables n and order c), we randomly generate M linear acyclic
models with latent variables and Gaussian noise and sample N = 500 data points. We then perform
independence tests up to order c and weight the (in)dependence statements using the weighting
schemes described in Section 3. For the frequentist weights we use tests based on partial correlations
and Fisher?s z-transform to obtain approximate p-values (see, e.g., [9]) with significance level
? = 0.05. For the Bayesian weights, we use the Bayesian test for conditional independence presented
in [13] as implemented by HEJ with a prior probability of 0.1 for independence.
In Figure 1(a) we show the average execution times on a single core of a 2.80GHz CPU for different
combinations of n and c, while in Figure 1(b) we show the execution times for n = 7, c = 1, sorting
the execution times in ascending order. For 7 variables ACI is almost 3 orders of magnitude faster
than HEJ, and the difference grows exponentially as n increases. For 8 variables HEJ can complete
only four of the first 40 simulated models before the timeout of 2500s. For reference we add the
execution time for bootstrapped anytime FCI and CFCI.
In Figure 2 we show the accuracy of the predictions with precision-recall (PR) curves for both
ancestral (X 99K Y ) and nonancestral (X 699K Y ) relations, in different settings. In this Figure, for
ACI and HEJ all of the results are computed using frequentist weights and, as in all evaluations, our
scoring method (16). While for these two methods we use c = 1, for (bootstrapped) (C)FCI we use
all possible independence test results (c = n − 2). In this case, the anytime versions of FCI and CFCI
are equivalent to the standard versions of FCI and CFCI. Since the overall results are similar, we
report the results with the Bayesian weights in the Supplementary Material.
In the first row of Figure 2, we show the setting with n = 6 variables. The performances of HEJ
and ACI coincide, performing significantly better for nonancestral predictions and the top ancestral
6
1
Bootstrapped (100) CFCI
Bootstrapped (100) FCI
HEJ (c=1)
ACI (c=1)
Standard CFCI
Standard FCI
0.7
1
1
0.95
0.98
0.9
0.96
0.85
0.6
Precision
Precision
0.8
Precision
0.9
0.8
0.75
0.5
0.94
Bootstrapped (100) CFCI
Bootstrapped (100) FCI
HEJ (c=1)
ACI (c=1)
Standard CFCI
Standard FCI
0.92
0.9
0.7
0.4
0.88
0.65
0.3
0.6
0
0.05
0.1
0.15
0.2
0.86
0
0.005
Recall
(a) PR ancestral: n=6
0.015
0.02
0
0.98
Precision
0.97
0.7
0.6
0.5
0.4
0.4
0.96
0.95
Bootstrapped (100) CFCI
Bootstrapped (100) FCI
ACI (c=1)
ACI (c=1, i=1)
Standard CFCI
Standard FCI
0.94
0.93
0.92
0.91
0.3
0.3
0.1
0.15
Recall
(d) PR ancestral: n=8
1
1
0.8
0.5
0.05
0.8
0.99
0.6
0
0.6
0.9
Precision
0.7
0.4
(c) PR nonancestral: n=6
1
Bootstrapped (100) CFCI
Bootstrapped (100) FCI
ACI (c=1)
ACI (c=1, i=1)
Standard CFCI
Standard FCI
0.8
0.2
Recall
(b) PR ancestral: n=6 (zoom)
1
0.9
Precision
0.01
Recall
0.2
0.9
0
0.005
0.01
0.015
Recall
(e) PR ancestral: n=8 (zoom)
0.02
0
0.2
0.4
0.6
0.8
1
Recall
(f) PR nonancestral: n=8
Figure 2: Accuracy on synthetic data for the two prediction tasks (ancestral and nonancestral relations)
using the frequentist test with ? = 0.05. The left column shows the precision-recall curve for ancestral
predictions, the middle column shows a zoomed-in version in the interval (0,0.02), while the right
column shows the nonancestral predictions.
predictions (see zoomed-in version in Figure 2(b)). This is remarkable, as HEJ and ACI use only
independence test results up to order c = 1, in contrast with (C)FCI which uses independence test
results of all orders. Interestingly, the two discrete optimization algorithms do not seem to benefit
much from higher order independence tests, thus we omit them from the plots (although we add the
graphs in the Supplementary Material). Instead, bootstrapping traditional methods, oblivious to the
(in)dependence weights, seems to produce surprisingly good results. Nevertheless, both ACI and HEJ
outperform bootstrapped FCI and CFCI, suggesting these methods achieve nontrivial error-correction.
In the second row of Figure 2, we show the setting with 8 variables. In this setting HEJ is too slow. In
addition to the previous plot, we plot the accuracy of ACI when there is oracle background knowledge
on the descendants of one variable (i = 1). This setting simulates the effect of using interventional
data, and we can see that the performance of ACI improves significantly, especially in the ancestral
preditions. The performance of (bootstrapped) FCI and CFCI is limited by the fact that they cannot
take advantage of this background knowledge, except with complicated postprocessing [1].
Application on real data We consider the challenging task of reconstructing a signalling network
from flow cytometry data [21] under different experimental conditions. Here we consider one
experimental condition as the observational setting and seven others as interventional settings. More
details and more evaluations are reported in the Supplementary Material. In contrast to likelihood-based approaches like [21, 5, 15, 19], in our approach we do not need to model the interventions
quantitatively. We only need to know the intervention targets, while the intervention types do not
matter. Another advantage of our approach is that it takes into account possible latent variables.
We use a t-test to test for each intervention and for each variable whether its distribution changes
with respect to the observational condition. We use the p-values of these tests as in (14) in order to
obtain weighted ancestral relations that are used as input (with threshold α = 0.05). For example, if
adding U0126 (a MEK inhibitor) changes the distribution of RAF significantly with respect to the
observational baseline, we get a weighted ancestral relation MEK99KRAF. In addition, we use partial
correlations up to order 1 (tested in the observational data only) to obtain weighted independences
used as input. We use ACI with (16) to score the ancestral relations for each ordered pair of variables.
The main results are illustrated in Figure 3, where we compare ACI with bootstrapped anytime CFCI
under different inputs. The output for bootstrapped anytime FCI is similar, so we report it only in the Supplementary Material. Algorithms like (anytime) (C)FCI can only use the independences in the observational data as input and therefore miss the strongest signal, weighted ancestral relations, which are obtained by comparing interventional with observational data. In the Supplementary Material, we also compare with other methods ([17], [15]). Interestingly, as we show there, our results are similar to the best acyclic model reconstructed by the score-based method from [15]. As for other constraint-based methods, HEJ is computationally unfeasible in this setting, while COMBINE assumes perfect interventions (whereas this dataset contains mostly activity interventions).

Figure 3: Results for the flow cytometry dataset. Each matrix represents the ancestral relations, where each row represents a cause and each column an effect. The colors encode the confidence levels: green is positive, black is unknown, while red is negative. The intensity of the color represents the degree of confidence. For example, ACI identifies MEK to be a cause of RAF with high confidence. (Panels over the variables Raf, Mek, PLCg, PIP2, PIP3, Erk, Akt, PKA, PKC, p38 and JNK: the weighted causes(i, j) input, ACI (input: independences of order ≤ 1 and weighted ancestral relations), FCI, and CFCI.)
Notably, our algorithms can correctly recover from faithfulness violations (e.g., the independence
between MEK and ERK), because they take into account the weight of the input statements (the weight
of the independence is considerably smaller than that of the ancestral relation, which corresponds
to a quite significant change in distribution). In contrast, methods that start by reconstructing the
skeleton, like (anytime) (C)FCI, would decide that MEK and ERK are nonadjacent, and are unable to
recover from that erroneous decision. This illustrates another advantage of our approach.
Discussion and conclusions
As we have shown, ancestral structures are very well-suited for causal discovery. They offer a
natural way to incorporate background causal knowledge, e.g., from experimental data, and allow a
huge computational advantage over existing representations for error-correcting algorithms, such as
[8]. When needed, ancestral structures can be mapped to a finer-grained representation with direct
causal relations, as we sketch in the Supplementary Material. Furthermore, confidence estimates on
causal predictions are extremely helpful in practice, and can significantly boost the reliability of the
output. Although standard methods, like bootstrapping (C)FCI, already provide reasonable estimates,
methods that take into account the confidence in the inputs, as the one presented here, can lead to
further improvements of the reliability of causal relations inferred from data.
Strangely (or fortunately) enough, neither of the optimization methods seems to improve much with
higher order independence test results. We conjecture that this may happen because our loss function
essentially assumes that the test results are independent of one another (which is not true). Finding a
way to take this into account in the loss function may further improve the achievable accuracy, but
such an extension may not be straightforward.
Acknowledgments
SM and JMM were supported by NWO, the Netherlands Organization for Scientific Research
(VIDI grant 639.072.410). SM was also supported by the Dutch programme COMMIT/ under the
Data2Semantics project. TC was supported by NWO grant 612.001.202 (MoCoCaDi), and EU-FP7
grant agreement n.603016 (MATRICS). We also thank Sofia Triantafillou for her feedback, especially
for pointing out the correct way to read ancestral relations from a PAG.
[Figure residue: an additional heatmap panel over the same eleven variables, color scale from −1000 to 1000.]
under different inputs. The output for bootstrapped anytime FCI is similar, so we report it only in
the Supplementary Material. Algorithms like (anytime) (C)FCI can only use the independences in
the observational data as input and therefore miss the strongest signal, weighted ancestral relations,
which are obtained by comparing interventional with observational data. In the Supplementary
Material, we compare also with other methods ([17], [15]). Interestingly, as we show there, our
results are similar to the best acyclic model reconstructed by the score-based method from [15]. As for
other constraint-based methods, HEJ is computationally infeasible in this setting, while COMBINE
assumes perfect interventions (whereas this dataset contains mostly activity interventions).
[Figure 3 residue: heatmap panels over the variables Raf, Mek, PLCg, PIP2, PIP3, Erk, Akt, PKA, PKC, p38 and JNK, color scales from −1000 to 1000. Panels: (a) Bootstrapped (100) anytime CFCI (input: independences of order ≤ 1); (b) ACI (input: weighted ancestral relations); legend entries: ACI (ancestral r. + indep. ≤ 1), ACI (causes), ACI (ancestral relations), BCFCI (indep. ≤ 1), "Weighted indep(i,j)".]
Figure 3: Results for the flow cytometry dataset. Each matrix represents the ancestral relations, where each row represents a cause and each column an effect. The colors encode the confidence levels: green is positive, black is unknown, and red is negative. The intensity of the color represents the degree of confidence. For example, ACI identifies MEK to be a cause of RAF with high confidence.
References
[1] G. Borboudakis and I. Tsamardinos. Incorporating causal prior knowledge as path-constraints in Bayesian networks and Maximal Ancestral Graphs. In ICML, pages 1799–1806, 2012.
[2] T. Claassen and T. Heskes. A logical characterization of constraint-based causal discovery. In UAI, pages 135–144, 2011.
[3] T. Claassen and T. Heskes. A Bayesian approach to constraint-based causal inference. In UAI, pages 207–216, 2012.
[4] D. Colombo, M. H. Maathuis, M. Kalisch, and T. S. Richardson. Learning high-dimensional directed acyclic graphs with latent and selection variables. The Annals of Statistics, 40(1):294–321, 2012.
[5] D. Eaton and K. Murphy. Exact Bayesian structure learning from uncertain interventions. In AISTATS, pages 107–114, 2007.
[6] M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub. Clingo = ASP + control: Extended report. Technical report, University of Potsdam, 2014. http://www.cs.uni-potsdam.de/wv/pdfformat/gekakasc14a.pdf.
[7] M. Gelfond. Answer sets. In Handbook of Knowledge Representation, pages 285–316. 2008.
[8] A. Hyttinen, F. Eberhardt, and M. Järvisalo. Constraint-based causal discovery: Conflict resolution with Answer Set Programming. In UAI, pages 340–349, 2014.
[9] M. Kalisch and P. Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. Journal of Machine Learning Research, 8:613–636, 2007.
[10] M. Kalisch, M. Mächler, D. Colombo, M. Maathuis, and P. Bühlmann. Causal inference using graphical models with the R package pcalg. Journal of Statistical Software, 47(1):1–26, 2012.
[11] D. J. Kleitman and B. L. Rothschild. Asymptotic enumeration of partial orders on a finite set. Transactions of the American Mathematical Society, 205:205–220, 1975.
[12] V. Lifschitz. What is Answer Set Programming? In AAAI, pages 1594–1597, 2008.
[13] D. Margaritis and F. Bromberg. Efficient Markov network discovery using particle filters. Computational Intelligence, 25(4):367–394, 2009.
[14] F. Markowetz, S. Grossmann, and R. Spang. Probabilistic soft interventions in conditional Gaussian networks. In AISTATS, pages 214–221, 2005.
[15] J. M. Mooij and T. Heskes. Cyclic causal discovery from continuous equilibrium data. In UAI, pages 431–439, 2013.
[16] J. Pearl. Causality: models, reasoning and inference. Cambridge University Press, 2009.
[17] J. Peters, P. Bühlmann, and N. Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society, Series B, 8(5):947–1012, 2015.
[18] J. Ramsey, J. Zhang, and P. Spirtes. Adjacency-faithfulness and conservative causal inference. In UAI, pages 401–408, 2006.
[19] D. Rothenhäusler, C. Heinze, J. Peters, and N. Meinshausen. BACKSHIFT: Learning causal cyclic graphs from unknown shift interventions. In NIPS, pages 1513–1521, 2015.
[20] A. Roumpelaki, G. Borboudakis, S. Triantafillou, and I. Tsamardinos. Marginal causal consistency in constraint-based causal learning. In Causation: Foundation to Application Workshop, UAI, 2016.
[21] K. Sachs, O. Perez, D. Pe'er, D. Lauffenburger, and G. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308:523–529, 2005.
[22] P. Spirtes. An anytime algorithm for causal inference. In AISTATS, pages 121–128, 2001.
[23] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2000.
[24] J. Tian and J. Pearl. Causal discovery from changes. In UAI, pages 512–521, 2001.
[25] S. Triantafillou and I. Tsamardinos. Constraint-based causal discovery from multiple interventions over overlapping variable sets. Journal of Machine Learning Research, 16:2147–2205, 2015.
[26] J. Zhang. On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence, 172(16-17):1873–1896, 2008.
Regularized Nonlinear Acceleration
Damien Scieur
INRIA & D.I., UMR 8548,
École Normale Supérieure, Paris, France.
[email protected]
Alexandre d'Aspremont
CNRS & D.I., UMR 8548,
École Normale Supérieure, Paris, France.
[email protected]
Francis Bach
INRIA & D.I., UMR 8548,
École Normale Supérieure, Paris, France.
[email protected]
Abstract
We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average
of the iterates produced by any optimization method. The weights in this average
are computed via a simple and small linear system, whose solution can be updated
online. This acceleration scheme runs in parallel to the base algorithm, providing improved estimates of the solution on the fly, while the original optimization
method is running. Numerical experiments are detailed on classical classification
problems.
1 Introduction
Suppose we want to solve the following optimization problem
$$\min_{x \in \mathbb{R}^n} f(x) \qquad (1)$$
in the variable $x \in \mathbb{R}^n$, where $f(x)$ is strongly convex with respect to the Euclidean norm with parameter $\mu$, and has a Lipschitz continuous gradient with parameter $L$ with respect to the same norm. This class of function is often encountered, for example in regression, where $f(x)$ is of the form
$$f(x) = L(x) + \Omega(x),$$
where $L(x)$ is a smooth convex loss function and $\Omega(x)$ is a smooth strongly convex penalty function. Assume we solve this problem using an iterative algorithm of the form
$$x_{i+1} = g(x_i), \quad \text{for } i = 1, \ldots, k, \qquad (2)$$
where $x_i \in \mathbb{R}^n$ and $k$ is the number of iterations. Here, we will focus on the problem of estimating the solution to (1) by tracking only the sequence of iterates $x_i$ produced by an optimization algorithm. This will lead to an acceleration of the speed of convergence, since we will be able to extrapolate more accurate solutions without any calls to the oracle $g(x)$.
Since the publication of Nesterov's optimal first-order smooth convex minimization algorithm [1], a
significant effort has been focused on either providing more explicit interpretable views on current
acceleration techniques, or on replicating these complexity gains using different, more intuitive
schemes. Early efforts were focused on directly extending the original acceleration result in [1] to
broader function classes [2], allow for generic metrics, line searches or simpler proofs [5, 6], produce
adaptive accelerated algorithms [7], etc. More recently however, several authors [8, 9] have started
using classical results from control theory to obtain numerical bounds on convergence rates that
match the optimal rates. Others have studied the second order ODEs obtained as the limit for small
step sizes of classical accelerated schemes, to better understand their convergence [10, 11]. Finally,
recent results have also shown how to wrap classical algorithms in an outer optimization loop, to
accelerate convergence [12] and reach optimal complexity bounds.
Here, we take a significantly different approach to convergence acceleration stemming from classical
results in numerical analysis. We use the iterates produced by any (converging) optimization algorithm,
and estimate the solution directly from this sequence, assuming only some regularity conditions on the
function to minimize. Our scheme is based on the idea behind Aitken's Δ² algorithm [13], generalized
as the Shanks transform [14], whose recursive formulation is known as the ε-algorithm [15] (see e.g.
[16, 17] for a survey). In a nutshell, these methods fit geometrical models to linearly converging
sequences, then extrapolate their limit from the fitted model.
In a sense, this approach is more statistical in nature. It assumes an approximately linear model
holds for iterations near the optimum, and estimates this model using the iterates. In fact, Wynn's
algorithm [15] is directly connected to the Levinson-Durbin algorithm [18, 19] used to solve Toeplitz
systems recursively and fit autoregressive models (the Shanks transform solves Hankel systems, but
this is essentially the same problem [20]). The key difference here is that estimating the autocovariance
operator is not required, as we only focus on the limit. Moreover, the method presents strong links
with the conjugate gradient when applied to unconstrained quadratic optimization.
We start from a slightly different formulation of these techniques known as minimal polynomial
extrapolation (MPE) [17, 21] which uses the minimal polynomial of the linear operator driving
iterations to estimate the optimum by nonlinear averaging (i.e., using weights in the average which are
nonlinear functions of the iterates). So far, for all the techniques cited above, no proofs of convergence
of these estimates were given in the case where the iterates made the estimation process unstable.
Our contribution here is to add a regularization in order to produce explicit bounds on the distance to
optimality by controlling the stability through the regularization parameter, thus explicitly quantifying the acceleration provided by these techniques. We show in several numerical examples that
these stabilized estimates often speed up convergence by an order of magnitude. Furthermore this
acceleration scheme thus runs in parallel to the original algorithm, providing improved estimates of
the solution on the fly, while the original method is progressing.
The paper is organized as follows. In section 2.1 we recall basic results behind MPE for linear
iterations and we will introduce in section 2.2 a formulation of the approximate version of MPE and
make a link with the conjugate gradient method. Then, in section 2.3, we generalize these results to
generic nonlinear iterations and show, in section 2.4, how to fully control the impact of nonlinearity.
We use these results to derive explicit bounds on the acceleration performance of our estimates.
2
Approximate Minimal Polynomial Extrapolation
In what follows, we recall the key arguments behind minimal polynomial extrapolation (MPE) as
derived in [22] or also [21]. We also explain a variant called approximate minimal polynomial
extrapolation (AMPE) which allows to control the number of iterates used in the extrapolation, hence
reduces its computational complexity. We begin by a simple description of the method for linear
iterations, then extend these results to the generic nonlinear case. Finally, we fully characterize the
acceleration factor provided by a regularized version of AMPE, using regularity properties of the
function f (x), and the result of a Chebyshev-like, tractable polynomial optimization problem.
2.1 Linear Iterations
Here, we assume that the iterative algorithm in (2) is in fact linear, with
$$x_i = A(x_{i-1} - x^*) + x^*, \qquad (3)$$
where $A \in \mathbb{R}^{n \times n}$ (not necessarily symmetric) and $x^* \in \mathbb{R}^n$. We assume that 1 is not an eigenvalue of $A$, implying that (3) admits a unique fixed point $x^*$. Moreover, if we assume that $\|A\|_2 < 1$, then $x_k$ converges to $x^*$ at rate $\|x_k - x^*\|_2 \leq \|A\|_2^k \|x_0 - x^*\|_2$. We now recall the minimal polynomial extrapolation (MPE) method as described in [21], starting with the following definition.
Definition 2.1 Given $A \in \mathbb{R}^{n \times n}$ such that 1 is not an eigenvalue of $A$, and $v \in \mathbb{R}^n$, the minimal polynomial of $A$ with respect to the vector $v$ is the lowest degree polynomial $p(x)$ such that
$$p(A)v = 0, \qquad p(1) = 1.$$
Note that the degree of $p(x)$ is always less than $n$ and the condition $p(1) = 1$ makes $p$ unique. Notice that because we assumed that 1 is not an eigenvalue of $A$, having $p(1) = 1$ is not restrictive since we can normalize each minimal polynomial with the sum of its coefficients (see Lemma A.1 in
the supplementary material). Given an initial iterate $x_0$, MPE starts by forming a matrix $U$ whose columns are the increments $x_{i+1} - x_i$, with
$$u_i = x_{i+1} - x_i = (A - I)(x_i - x^*) = (A - I)A^i(x_0 - x^*). \qquad (4)$$
Now, let $p$ be the minimal polynomial of $A$ with respect to the vector $u_0$ (where $p$ has coefficients $c_i$ and degree $d$), and $U = [u_0, u_1, \ldots, u_d]$. So
$$\sum_{i=0}^d c_i u_i = \sum_{i=0}^d c_i A^i u_0 = p(A)u_0 = 0, \qquad p(1) = \sum_{i=0}^d c_i = 1. \qquad (5)$$
We can thus solve the system $Uc = 0$, $\sum_i c_i = 1$ to find $p$. In this case, the fixed point $x^*$ can be computed exactly as follows:
$$0 = \sum_{i=0}^d c_i A^i u_0 = \sum_{i=0}^d c_i A^i (A - I)(x_0 - x^*) = (A - I)\sum_{i=0}^d c_i A^i (x_0 - x^*) = (A - I)\sum_{i=0}^d c_i (x_i - x^*).$$
Hence, using the fact that 1 is not an eigenvalue of $A$ and $p(1) = 1$,
$$(A - I)\sum_{i=0}^d c_i (x_i - x^*) = 0 \;\Rightarrow\; \sum_{i=0}^d c_i (x_i - x^*) = 0 \;\Rightarrow\; \sum_{i=0}^d c_i x_i = x^*.$$
This means that $x^*$ is obtained by averaging iterates using the coefficients in $c$. The averaging in this case is called nonlinear, since the coefficients of $c$ vary with the iterates themselves.
2.2 Approximate Minimal Polynomial Extrapolation (AMPE)
Suppose now that we only compute a fraction of the iterates $x_i$ used in the MPE procedure. While the number of iterates $k$ might be smaller than the degree of the minimal polynomial of $A$ with respect to $u_0$, we can still try to make the quantity $p_k(A)u_0$ small, where $p_k(x)$ is now a polynomial of degree at most $k$. The corresponding difference matrix $U = [u_0, u_1, \ldots, u_k] \in \mathbb{R}^{n \times (k+1)}$ is rectangular. This is also known as the Eddy–Mešina method [3, 4] or reduced rank extrapolation with arbitrary $k$ (see [21, §10]). The objective here is similar to (5), but the system is now overdetermined because $k < \deg(p)$. We will thus choose $c$ to make $\|Uc\|_2 = \|p(A)u_0\|_2$, for some polynomial $p$ such that $p(1) = 1$, as small as possible, which means solving for
$$c^* \triangleq \operatorname*{argmin}_{c:\; \mathbf{1}^T c = 1} \|Uc\|_2 \qquad \text{(AMPE)}$$
in the variable $c \in \mathbb{R}^{k+1}$. The optimal value $\|Uc^*\|_2$ of this problem is decreasing with $k$, satisfies $\|Uc^*\|_2 = 0$ when $k$ is greater than the degree of the minimal polynomial, and controls the approximation error in $x^*$ through equation (4). Setting $u_i = (A - I)(x_i - x^*)$, we have
$$\Big\|\sum_{i=0}^k c_i^* x_i - x^*\Big\|_2 = \Big\|(I - A)^{-1}\sum_{i=0}^k c_i^* u_i\Big\|_2 \leq \big\|(I - A)^{-1}\big\|_2\, \|Uc^*\|_2.$$
We can get a crude bound on $\|Uc^*\|_2$ from Chebyshev polynomials, using only an assumption on the range of the spectrum of the matrix $A$. Assume $A$ symmetric, $0 \preceq A \preceq \sigma I \prec I$ and $\deg(p) \leq k$. Indeed,
$$\|Uc^*\|_2 = \|p^*(A)u_0\|_2 \leq \|u_0\|_2 \min_{p:\,p(1)=1} \|p(A)\|_2 \leq \|u_0\|_2 \min_{p:\,p(1)=1}\; \max_{A:\; 0 \preceq A \preceq \sigma I} \|p(A)\|_2, \qquad (6)$$
where $p^*$ is the polynomial with coefficients $c^*$. Since $A$ is symmetric, we have $A = Q\Lambda Q^T$ where $Q$ is unitary. We can thus simplify the objective function:
$$\max_{A:\; 0 \preceq A \preceq \sigma I} \|p(A)\|_2 = \max_{\Lambda:\; 0 \preceq \Lambda \preceq \sigma I} \|p(\Lambda)\|_2 = \max_{\Lambda:\; 0 \preceq \Lambda \preceq \sigma I}\; \max_i |p(\lambda_i)| = \max_{\lambda:\; 0 \leq \lambda \leq \sigma} |p(\lambda)|.$$
We thus have
$$\|Uc^*\|_2 \leq \|u_0\|_2 \min_{p:\,p(1)=1}\; \max_{\lambda:\; 0 \leq \lambda \leq \sigma} |p(\lambda)|.$$
So we must find a polynomial which takes small values on the interval $[0, \sigma]$. However, Chebyshev polynomials are known to be the polynomials whose maximal value on the interval $[-1, 1]$ is the smallest. Let $C_k$ be the Chebyshev polynomial of degree $k$. By definition, $C_k(x)$ is a monic polynomial¹ which solves
$$C_k(x) = \operatorname*{argmin}_{p:\; p \text{ is monic}}\; \max_{x \in [-1, 1]} |p(x)|.$$
We can thus use a variant of $C_k(x)$ in order to solve the minimax problem
$$\min_{p:\,p(1)=1}\; \max_{\lambda:\; 0 \leq \lambda \leq \sigma} |p(\lambda)|. \qquad (7)$$
The solution of this problem is given in [23] and admits an explicit formulation:
$$T_k(x) = \frac{C_k(t(x))}{C_k(t(1))}, \qquad t(x) = \frac{2x - \sigma}{\sigma}.$$
Note that $t(x)$ is simply a linear mapping from the interval $[0, \sigma]$ to $[-1, 1]$. Moreover,
$$\min_{p:\,p(1)=1}\; \max_{\lambda:\; 0 \leq \lambda \leq \sigma} |p(\lambda)| = \max_{\lambda:\; 0 \leq \lambda \leq \sigma} |T_k(\lambda)| = |T_k(\sigma)| = \frac{2\zeta^k}{1 + \zeta^{2k}}, \qquad (8)$$
where $\zeta$ is
$$\zeta = \big(1 - \sqrt{1 - \sigma}\big)\big/\big(1 + \sqrt{1 - \sigma}\big) < \sigma. \qquad (9)$$
Since $\|u_0\|_2 = \|(A - I)(x_0 - x^*)\|_2 \leq \|A - I\|_2\, \|x_0 - x^*\|_2$, we can bound (6) by
$$\|Uc^*\|_2 \leq \|u_0\|_2 \min_{p:\,p(1)=1}\; \max_{\lambda:\; 0 \leq \lambda \leq \sigma} |p(\lambda)| \leq \|A - I\|_2\, \frac{2\zeta^k}{1 + \zeta^{2k}}\, \|x_0 - x^*\|_2.$$
This leads to the following proposition.
Proposition 2.2 Let $A$ be symmetric, $0 \preceq A \preceq \sigma I \prec I$ and $c^*$ be the solution of (AMPE). Then
$$\Big\|\sum_{i=0}^k c_i^* x_i - x^*\Big\|_2 \leq \kappa(A - I)\, \frac{2\zeta^k}{1 + \zeta^{2k}}\, \|x_0 - x^*\|_2, \qquad (10)$$
where $\kappa(A - I)$ is the condition number of the matrix $A - I$ and $\zeta$ is defined in (9).
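The rate in (8)–(9) is easy to evaluate numerically; the short snippet below (an illustration, not from the paper) prints the minimax value 2ζ^k/(1 + ζ^(2k)) for a given spectrum bound σ.

# Quick check of the Chebyshev rate in (8)-(9): for a spectrum bound sigma,
# the minimax value 2*zeta**k / (1 + zeta**(2*k)) decays geometrically in k.
import numpy as np

def chebyshev_rate(sigma, k):
    zeta = (1 - np.sqrt(1 - sigma)) / (1 + np.sqrt(1 - sigma))
    return 2 * zeta**k / (1 + zeta**(2 * k))

for k in (1, 5, 10, 20):
    print(k, chebyshev_rate(0.99, k))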
Note that, when solving quadratic optimization problems, the rate in this bound matches that obtained using the optimal method in [6]. Also, the bound on the rate of convergence of this method is exactly the one of the conjugate gradient with an additional factor $\kappa(A - I)$.
Remark: This method presents a strong link with the conjugate gradient. Denote $\|v\|_B = \sqrt{v^T B v}$ the norm induced by the positive definite matrix $B$. By definition, at the $k$-th iteration, the conjugate gradient computes an approximation $s$ of $x^*$ which follows
$$s = \operatorname*{argmin}_{x \in \mathcal{K}_k} \|x - x^*\|_A,$$
where $\mathcal{K}_k = \{Ax^*, A^2 x^*, \ldots, A^k x^*\}$ is called a Krylov subspace. Since $x \in \mathcal{K}_k$, we have that $x$ is a linear combination of the elements of $\mathcal{K}_k$, so $x = \sum_{i=1}^k c_i A^i x^* = q(A)x^*$, where $q(x)$ is a polynomial of degree $k$ and $q(0) = 0$. So conjugate gradient solves
$$s = \operatorname*{argmin}_{q:\, q(0)=0} \|q(A)x^* - x^*\|_A = \operatorname*{argmin}_{\tilde q:\, \tilde q(0)=1} \|\tilde q(A)x^*\|_A,$$
which is very similar to equation (AMPE). However, while conjugate gradient has access to an oracle which gives the result of the product between matrix $A$ and any vector $v$, the AMPE procedure can only use the iterations produced by (3) (meaning that the AMPE procedure does not need to know $A$). Moreover, we analyze the convergence of AMPE in another norm ($\|\cdot\|_2$ instead of $\|\cdot\|_A$). These two reasons explain why a condition number appears in the rate of convergence of AMPE (10).
¹ A monic polynomial is a univariate polynomial in which the coefficient of highest degree is equal to 1.
2.3 Nonlinear Iterations
We now go back to the general case where the iterative algorithm is nonlinear, with
$$\tilde x_{i+1} = g(\tilde x_i), \quad \text{for } i = 1, \ldots, k, \qquad (11)$$
where $\tilde x_i \in \mathbb{R}^n$ and the function $g$ has a symmetric Jacobian at point $x^*$. We also assume that the method has a unique fixed point written $x^*$ and linearize these iterations around $x^*$, to get
$$\tilde x_i - x^* = A(\tilde x_{i-1} - x^*) + e_i, \qquad (12)$$
where $A$ is now the Jacobian matrix (i.e., the first derivative) of $g$ taken at the fixed point $x^*$ and $e_i \in \mathbb{R}^n$ is a second order error term, $\|e_i\|_2 = O(\|\tilde x_{i-1} - x^*\|_2^2)$. Note that, by construction, the linear and nonlinear models share the same fixed point $x^*$. We write $x_i$ the iterates that would be obtained using the asymptotic linear model (starting at $x_0$)
$$x_i - x^* = A(x_{i-1} - x^*).$$
Running the algorithm described in (11), we thus observe the iterates $\tilde x_i$ and build $\tilde U$ from their differences. As in (AMPE) we then compute $\tilde c$ using matrix $\tilde U$ and finally estimate
$$\tilde x^* = \sum_{i=0}^k \tilde c_i \tilde x_i.$$
In this case, our estimate for $x^*$ is based on the coefficients $\tilde c$, computed using the iterates $\tilde x_i$. We will now decompose the error made by the estimation by comparing it with the estimation which comes from the linear model:
$$\Big\|\sum_{i=0}^k \tilde c_i \tilde x_i - x^*\Big\|_2 \leq \Big\|\sum_{i=0}^k (\tilde c_i - c_i) x_i\Big\|_2 + \Big\|\sum_{i=0}^k \tilde c_i (\tilde x_i - x_i)\Big\|_2 + \Big\|\sum_{i=0}^k c_i x_i - x^*\Big\|_2. \qquad (13)$$
This expression shows us that the precision is comparable to the precision of the AMPE process in the linear case (third term) with some perturbation. Also, if $\|e_i\|_2$ is small then $\|x_i - \tilde x_i\|_2$ is small as well. But we need more information about $\|c\|_2$ and $\|\tilde c - c\|_2$ if we want to go further.
We now show the following proposition, computing the perturbation $\Delta c = \tilde c^* - c^*$ of the optimal solution of (AMPE), $c^*$, induced by $E = \tilde U - U$. It will allow us to bound the first term on the right-hand side of (13) (see proof A.2 in the Appendix). For simplicity, we will use $P = \tilde U^T \tilde U - U^T U$.
Proposition 2.3 Let $c^*$ be the optimal solution to (AMPE)
$$c^* = \operatorname*{argmin}_{\mathbf{1}^T c = 1} \|Uc\|_2$$
for some matrix $U \in \mathbb{R}^{n \times k}$. Suppose $U$ becomes $\tilde U = U + E$ and write $c^* + \Delta c$ the perturbed solution to (AMPE). Let $M = \tilde U^T \tilde U$ and the perturbation matrix $P = \tilde U^T \tilde U - U^T U$. Then,
$$\Delta c = -\left(I - \frac{M^{-1}\mathbf{1}\mathbf{1}^T}{\mathbf{1}^T M^{-1}\mathbf{1}}\right) M^{-1} P c^*. \qquad (14)$$
We see here that the perturbation can be potentially large. Even if $\|c^*\|_2$ and $\|P\|_2$ can be potentially small, $\|M^{-1}\|_2$ is huge in general. It can be shown that $U^T U$ (the square of a Krylov-like matrix) presents an exponential condition number (see [24]) because the minimal eigenvalue decays very fast. Moreover, the eigenvalues are perturbed by $P$, leading to a potentially huge perturbation $\Delta c$, especially if $\|P\|_2$ is comparable to (or bigger than) $\lambda_{\min}(U^T U)$.
2.4 Regularized AMPE
The condition number of the matrix $U^T U$ in problem (AMPE) can be arbitrarily large. Indeed, this condition number is related to that of Krylov matrices, which has been proved in [24] to be exponential in $k$. In consequence, this conditioning problem, coupled with nonlinear errors, leads to highly unstable solutions $c^*$ (which we observe in our experiments). We thus study a regularized formulation of problem (AMPE), which reads
$$\begin{array}{ll} \text{minimize} & c^T (U^T U + \lambda I) c \\ \text{subject to} & \mathbf{1}^T c = 1 \end{array} \qquad \text{(RMPE)}$$
The solution of this problem may be computed with a linear system, and the regularization parameter controls the norm of the solution, as shown in the following lemma (see proof A.3 in the Appendix).
Lemma 2.4 Let $c^*_\lambda$ be the optimal solution of problem (RMPE). Then
$$c^*_\lambda = \frac{(U^T U + \lambda I)^{-1}\mathbf{1}}{\mathbf{1}^T (U^T U + \lambda I)^{-1}\mathbf{1}} \qquad \text{and} \qquad \|c^*_\lambda\|_2 \leq \sqrt{\frac{\lambda + \|U\|_2^2}{k\,\lambda}}. \qquad (15)$$
This allows us to obtain the following corollary, extending Proposition 2.3 to the regularized AMPE problem in (RMPE), showing that the perturbation of $c$ is now controlled by the regularization parameter $\lambda$.
Corollary 2.5 Let $c^*_\lambda$, defined in (15), be the solution of problem (RMPE). Then the solution of problem (RMPE) for the perturbed matrix $\tilde U = U + E$ is given by $c^*_\lambda + \Delta c_\lambda$ where
$$\Delta c_\lambda = -W M_\lambda^{-1} P c^*_\lambda = -M_\lambda^{-1} W^T P c^*_\lambda \qquad \text{and} \qquad \|\Delta c_\lambda\|_2 \leq \frac{\|P\|_2}{\lambda}\, \|c^*_\lambda\|_2,$$
where $M_\lambda = U^T U + P + \lambda I$ and $W = I - \frac{M_\lambda^{-1}\mathbf{1}\mathbf{1}^T}{\mathbf{1}^T M_\lambda^{-1}\mathbf{1}}$ is a projector of rank $k - 1$.
These results lead us to the following simple algorithm.
Algorithm 1 Regularized Approximate Minimal Polynomial Extrapolation (RMPE)
  Input: Sequence $\{x_0, x_1, \ldots, x_{k+1}\}$, parameter $\lambda > 0$
  Compute $U = [x_1 - x_0, \ldots, x_{k+1} - x_k]$
  Solve the linear system $(U^T U + \lambda I) z = \mathbf{1}$
  Set $c = z / (z^T \mathbf{1})$
  Output: $\sum_{i=0}^k c_i x_i$, the approximation of the fixed point $x^*$
The computational complexity (with online updates or in batch mode) of the algorithm is $O(nk^2)$, and some strategies (batch and online) are discussed in Appendix A.3. Note that the algorithm never calls the oracle $g(x)$. This means that, in an optimization context, the acceleration does not require $f(x)$ or $f'(x)$ to compute the extrapolation. Moreover, it does not need a priori information on the function, for example $L$ and $\mu$ (unlike Nesterov's method).
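For concreteness, here is a direct NumPy transcription of Algorithm 1; only the variable names are ours.

# Direct implementation of Algorithm 1 (RMPE). Only the iterates are needed:
# no access to f, its gradient, or the fixed-point map g is required.
import numpy as np

def rmpe(xs, lam):
    """xs: sequence of k+2 iterates x_0, ..., x_{k+1}; lam: regularization > 0.
    Returns the extrapolated approximation of the fixed point x*."""
    X = np.asarray(xs)                   # shape (k+2, n)
    U = np.diff(X, axis=0).T             # shape (n, k+1), columns x_{i+1} - x_i
    k1 = U.shape[1]
    z = np.linalg.solve(U.T @ U + lam * np.eye(k1), np.ones(k1))
    c = z / z.sum()                      # enforces 1^T c = 1
    return X[:k1].T @ c                  # weighted (nonlinear) average of x_0, ..., x_k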
2.5 Convergence Bounds on Regularized AMPE
To fully characterize the convergence of our estimate sequence, we still need to bound the last term on the right-hand side of (13), namely $\|\sum_{i=0}^k c_i x_i - x^*\|_2$. A coarse bound can be provided using Chebyshev polynomials; however, the norm of the Chebyshev coefficients grows exponentially as $k$ grows. Here we refine this bound to better control the quality of our estimate.
Let $g'(x^*) \preceq \sigma I$. Consider the following Chebyshev-like optimization problem, written
$$S(k, \alpha) \triangleq \min_{\{q \in \mathbb{R}_k[x]:\; q(1) = 1\}}\; \max_{x \in [0, \sigma]} \big((1 - x)q(x)\big)^2 + \alpha \|q\|_2^2, \qquad (16)$$
where $\mathbb{R}_k[x]$ is the ring of polynomials of degree at most $k$ and $q \in \mathbb{R}^{k+1}$ is the vector of coefficients of the polynomial $q(x)$. This problem can be solved exactly using a semidefinite solver because it can be reduced to an SDP program (see Appendix A.4 for the details of the reduction). Our main result below shows how $S(k, \alpha)$ bounds the error between our estimate of the optimum constructed using the iterates $\tilde x_i$ in (RMPE) and the optimum $x^*$ of problem (1).
Proposition 2.6 Let matrices $X = [x_0, x_1, \ldots, x_k]$, $\tilde X = [\tilde x_0, \tilde x_1, \ldots, \tilde x_k]$, $E = X - \tilde X$ and scalar $\kappa = \|(A - I)^{-1}\|_2$. Suppose $\tilde c^*_\lambda$ solves problem (RMPE)
$$\tilde c^*_\lambda = \frac{(\tilde U^T \tilde U + \lambda I)^{-1}\mathbf{1}}{\mathbf{1}^T (\tilde U^T \tilde U + \lambda I)^{-1}\mathbf{1}} = \operatorname*{argmin}_{\mathbf{1}^T c = 1}\; c^T (\tilde U^T \tilde U + \lambda I) c \qquad (17)$$
in the variable $c \in \mathbb{R}^{k+1}$, with parameter $\tilde U \in \mathbb{R}^{n \times (k+1)}$. Assume $A$ symmetric with $0 \preceq A \preceq \sigma I \prec I$. Then
$$\|\tilde X \tilde c^*_\lambda - x^*\|_2 \leq \left(\kappa^2 + \frac{1}{\lambda}\Big(1 + \frac{\|P\|_2}{\lambda}\Big)^2 \Big(\|E\|_2 + \kappa\, \frac{\|P\|_2}{2\sqrt{\lambda}}\Big)^2\right)^{\!1/2} S\big(k, \lambda/\|x_0 - x^*\|_2^2\big)^{1/2}\, \|x_0 - x^*\|_2,$$
where $P$ is defined in Corollary 2.5 and $S(k, \alpha)$ is defined in (16).
We have that $S(k, \lambda/\|x_0 - x^*\|_2^2)^{1/2}$ is similar to the value $T_k(\sigma)$ (see (8)), so our algorithm achieves a rate similar to Chebyshev acceleration up to some multiplicative scalar. We thus need to choose $\lambda$ so that this multiplicative scalar is not too high (while keeping $S(k, \lambda/\|x_0 - x^*\|_2^2)^{1/2}$ small).
We can analyze the behavior of the bound if we start close to the optimum. Assume
$$\|E\|_2 = O(\|x_0 - x^*\|_2^2), \qquad \|U\|_2 = O(\|x_0 - x^*\|_2) \qquad \text{and} \qquad \|P\|_2 = O(\|x_0 - x^*\|_2^3).$$
This case is encountered when minimizing a smooth strongly convex function with Lipschitz-continuous Hessian using the fixed-step gradient method (this case is discussed in detail in the Appendix, Section A.6). Also, let $\lambda = \beta \|P\|_2$ for $\beta > 0$ and $\|x_0 - x^*\|$ small. We can thus approximate the right parenthesis of the bound by
$$\lim_{\|x - x^*\|_2 \to 0} \frac{\|E\|_2 + \frac{\kappa}{2}\frac{\|P\|_2}{\sqrt{\lambda}}}{\sqrt{\|P\|_2}} = \lim_{\|x - x^*\|_2 \to 0} \frac{\|E\|_2}{\sqrt{\|P\|_2}} + \frac{\kappa}{2\sqrt{\beta}} = \frac{\kappa}{2\sqrt{\beta}}.$$
Therefore, the bound on the precision of the extrapolation is approximately equal to
$$\|\tilde X \tilde c^*_\lambda - x^*\|_2 \lesssim \kappa \left(1 + \frac{(1 + \beta^{-1})^2}{4\beta^2}\right)^{\!1/2} \sqrt{S\Big(k, \frac{\beta \|P\|_2}{\|x_0 - x^*\|_2^2}\Big)}\; \|x_0 - x^*\|_2.$$
Also, if we use equation (8), it is easy to see that
$$\sqrt{S(k, 0)} \leq \min_{\{q \in \mathbb{R}_k[x]:\; q(1) = 1\}}\; \max_{x \in [0, \sigma]} |q(x)| = \frac{2\zeta^k}{1 + \zeta^{2k}},$$
where $\zeta$ is defined in (9). So, when $\|x_0 - x^*\|_2$ is close to zero, the regularized version of AMPE tends to converge as fast as AMPE (see equation (10)), up to a small constant.
3 Numerical Experiments
We test our methods on a regularized logistic regression problem written
$$f(w) = \sum_{i=1}^m \log\big(1 + \exp(-y_i \phi_i^T w)\big) + \frac{\tau}{2}\|w\|_2^2,$$
where $\Phi = [\phi_1, \ldots, \phi_m]^T \in \mathbb{R}^{m \times n}$ is the design matrix and $y \in \{-1, 1\}^m$ is the vector of labels. We used the Madelon UCI dataset, setting $\tau = 10^2$ (in order to have a ratio $L/\mu$ equal to $10^9$). We solve this problem using several algorithms: the fixed-step gradient method for strongly convex functions [6, Th. 2.1.15] using stepsize $2/(L + \mu)$, where $L = \|\Phi\|_2^2/4 + \tau$ and $\mu = \tau$; the accelerated gradient method for strongly convex functions [6, Th. 2.2.3]; and our nonlinear acceleration of the gradient method iterates using RMPE in Proposition 2.6 with restarts.
This last algorithm is implemented as follows. We do $k$ steps (in the numerical experiments, $k$ is typically equal to 5) of the gradient method, then extrapolate a solution $\tilde X \tilde c^*_\lambda$, where $\tilde c^*_\lambda$ is computed by solving the RMPE problem (17) on the gradient iterates $\tilde X$, with regularization parameter $\lambda$ determined by a grid search. Then, this extrapolation becomes the new starting point of the gradient method. We consider it as one iteration of RMPE$_k$ using $k$ gradient oracle calls. We also analyze the impact of an inexact line search (described in Appendix A.7) performed after this procedure.
The results are reported in Figure 1. Using very few iterates, the solutions computed using our estimate (a nonlinear average of the gradient iterates) are markedly better than those produced by the Nesterov-accelerated method. This is only partially reflected by the theoretical bound from Proposition 2.6, which shows significant speedup in some regions but remains highly conservative (see Figure 3 in Section A.6). Also, Figure 2 shows the impact of regularization: the AMPE process becomes unstable because of the condition number of the matrix $M$, which impacts the precision of the estimate.
4 Conclusion and Perspectives
In this paper, we developed a method which is able to accelerate, under some regularity conditions, the convergence of a sequence $\{x_i\}$ without any knowledge of the algorithm which generates this sequence. The regularization parameter present in the acceleration method can be computed easily using some inexact line search.
[Figure 1 residue: two convergence plots, y-axis $f(x_k) - f(x^*)$ from $10^{-5}$ to $10^0$; left x-axis: gradient oracle calls (0 to $10 \times 10^4$); right x-axis: CPU time in seconds (0 to 1500); legend: Gradient, Nesterov, Nest. + backtrack, RMPE 5, RMPE 5 + LS.]
Figure 1: Solving logistic regression on the UCI Madelon dataset (500 features, 2000 data points) using the gradient method, Nesterov's accelerated method and RMPE with $k = 5$ (with and without line search over the stepsize), with penalty parameter $\tau$ equal to $10^2$ (the condition number is equal to $1.2 \times 10^9$). Here, we see that our algorithm has a behavior similar to the conjugate gradient: unlike Nesterov's method, where we need to provide the parameters $\mu$ and $L$, the RMPE algorithm adapts itself to the spectrum of $g'(x^*)$ (so it can exploit the good local strong convexity parameter), without any prior specification. We can, for example, observe this behavior when the global strong convexity parameter is bad but not the local one.
[Figure 2 residue: convergence plot, y-axis $f(x_k) - f(x^*)$ from $10^{-2}$ to $10^2$; x-axis: gradient oracle calls (0 to $10 \times 10^4$); legend: Gradient, Nesterov, RMPE 5, AMPE 5.]
Figure 2: Logistic regression on the Madelon UCI dataset, solved using the gradient method, Nesterov's method and AMPE (i.e. RMPE with $\lambda = 0$). The condition number is equal to $1.2 \times 10^9$. We see that, without regularization, AMPE is unstable because $\|(\tilde U^T \tilde U)^{-1}\|_2$ is huge (see Proposition 2.3).
The algorithm itself is simple. By solving only a small linear system we are able to compute a good
estimate of the limit of the sequence $\{x_i\}$. Also, we showed (using the gradient method on logistic
regression) that the strategy which consists in alternating between the algorithm and the extrapolation method
can lead to impressive results, improving significantly the rate of convergence.
Future work will consist in improving the performance of the algorithm by exploiting the structure of
the noise matrix $E$ in some cases (for example, using the gradient method, the norm of the column
$E_k$ in the matrix $E$ decreases as $k$ grows), extending the algorithm to the constrained case, the
stochastic case and to the non-symmetric case. We will also try to refine the term (16) present in the
theoretical bound.
Acknowledgment. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under grant agreement no 607290 SpaRTaN, as well as support from ERC SIPA and the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche.
8
References
[1] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[2] A. S. Nemirovskii and Y. E. Nesterov. Optimal methods of smooth convex minimization. USSR Computational Mathematics and Mathematical Physics, 25(2):21–30, 1985.
[3] Mešina, M. [1977], 'Convergence acceleration for the iterative solution of the equations x = Ax + f', Computer Methods in Applied Mechanics and Engineering 10(2), 165–173.
[4] Eddy, R. [1979], 'Extrapolating to the limit of a vector sequence', Information linkage between applied mathematics and industry pp. 387–396.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[6] Y. Nesterov. Introductory Lectures on Convex Optimization. Springer, 2003.
[7] Y. Nesterov. Universal gradient methods for convex optimization problems. Mathematical Programming, 152(1-2):381–404, 2015.
[8] Yoel Drori and Marc Teboulle. Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming, 145(1-2):451–482, 2014.
[9] Laurent Lessard, Benjamin Recht, and Andrew Packard. Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 26(1):57–95, 2016.
[10] Weijie Su, Stephen Boyd, and Emmanuel Candes. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems, pages 2510–2518, 2014.
[11] Andre Wibisono and Ashia C Wilson. On accelerated methods in optimization. Technical report, 2015.
[12] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3366–3374, 2015.
[13] Alexander Craig Aitken. On Bernoulli's numerical solution of algebraic equations. Proceedings of the Royal Society of Edinburgh, 46:289–305, 1927.
[14] Daniel Shanks. Non-linear transformations of divergent and slowly convergent sequences. Journal of Mathematics and Physics, 34(1):1–42, 1955.
[15] Peter Wynn. On a device for computing the e_m(S_n) transformation. Mathematical Tables and Other Aids to Computation, 10(54):91–96, 1956.
[16] C. Brezinski. Accélération de la convergence en analyse numérique. Lecture Notes in Mathematics, (584), 1977.
[17] Avram Sidi, William F Ford, and David A Smith. Acceleration of convergence of vector sequences. SIAM Journal on Numerical Analysis, 23(1):178–196, 1986.
[18] N. Levinson. The Wiener RMS error criterion in filter design and prediction, Appendix B of Wiener, N. (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series, 1949.
[19] James Durbin. The fitting of time-series models. Revue de l'Institut International de Statistique, pages 233–244, 1960.
[20] Georg Heinig and Karla Rost. Fast algorithms for Toeplitz and Hankel matrices. Linear Algebra and its Applications, 435(1):1–59, 2011.
[21] David A Smith, William F Ford, and Avram Sidi. Extrapolation methods for vector sequences. SIAM Review, 29(2):199–233, 1987.
[22] Stan Cabay and L. W. Jackson. A polynomial extrapolation method for finding limits and antilimits of vector sequences. SIAM Journal on Numerical Analysis, 13(5):734–752, 1976.
[23] Gene H Golub and Richard S Varga. Chebyshev semi-iterative methods, successive overrelaxation iterative methods, and second order Richardson iterative methods. Numerische Mathematik, 3(1):157–168, 1961.
[24] Evgenij E Tyrtyshnikov. How bad are Hankel matrices? Numerische Mathematik, 67(2):261–269, 1994.
[25] Y. Nesterov. Squared functional systems and optimization problems. In High Performance Optimization, pages 405–440. Springer, 2000.
[26] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.
[27] P. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000.
[28] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. SIAM, 2001.
| 6267 |@word madelon:3 version:3 polynomial:31 norm:7 km:1 recursively:1 moment:1 reduction:1 initial:1 series:3 daniel:1 kx0:15 current:1 ka:6 comparing:1 must:1 written:3 stemming:1 numerical:9 zaid:1 extrapolating:1 interpretable:1 update:1 implying:1 stationary:1 device:1 xk:7 smith:2 recherche:1 ck2:4 num:1 iterates:19 coarse:1 successive:1 simpler:1 mathematical:4 constructed:1 differential:1 initiative:1 consists:1 introductory:1 fitting:1 introduce:1 x0:10 aitken:2 indeed:2 behavior:3 themselves:1 pour:1 sdp:1 mechanic:1 chaire:1 decreasing:2 cpu:1 solver:1 becomes:3 spain:1 estimating:2 moreover:6 provided:3 begin:1 lowest:1 what:1 argmin:4 developed:1 finding:1 transformation:2 nutshell:1 exactly:3 doklady:1 k2:54 rm:1 uk:1 control:6 grant:1 positive:1 engineering:2 local:2 tends:1 limit:6 consequence:1 ak:1 laurent:1 interpolation:1 approximately:2 inria:4 might:1 umr:3 studied:1 nemirovski:1 range:1 unique:3 acknowledgment:1 recursive:1 union:1 definite:2 revue:1 procedure:4 drori:1 universal:2 significantly:2 boyd:1 statistique:1 get:2 close:2 operator:2 context:1 projector:1 go:2 starting:3 l:2 convex:13 focused:2 survey:1 rectangular:1 simplicity:1 numerische:2 insight:1 jackson:1 stability:1 increment:1 updated:1 controlling:1 suppose:4 construction:1 programming:3 us:1 overdetermined:1 agreement:1 element:1 fly:2 solved:2 region:1 connected:1 highest:1 benjamin:1 pd:10 convexity:2 complexity:4 ui:4 ration:1 nesterov:14 solving:6 algebra:1 accelerate:2 easily:1 joint:1 soviet:1 fast:4 describe:1 kp:17 spartan:1 whose:3 supplementary:1 solve:7 toeplitz:2 richardson:1 transform:2 itself:1 analyse:1 ford:2 online:3 sequence:13 eigenvalue:6 maximal:1 product:1 fr:3 uci:3 loop:1 karla:1 adapts:1 intuitive:1 description:1 normalize:1 exploiting:1 convergence:19 regularity:3 optimum:6 extending:3 produce:2 ring:1 ben:1 tk:4 derive:1 linearize:1 damien:2 andrew:1 qt:1 received:1 solves:4 strong:4 implemented:1 come:1 filter:1 stochastic:1 material:1 require:1 decompose:1 proposition:9 hold:1 weijie:1 around:1 exp:1 mapping:1 driving:1 vary:1 early:1 smallest:1 a2:1 achieves:1 estimation:3 label:1 cole:3 minimization:3 always:1 normale:3 ck:6 shrinkage:1 broader:1 publication:1 wilson:1 corollary:3 derived:1 focus:2 ax:2 hongzhou:1 rank:2 bernoulli:1 sense:1 progressing:1 cnrs:1 typically:1 kc:3 sidi:2 france:3 tyrtyshnikov:1 classification:1 yoel:1 priori:1 ussr:1 constrained:1 smoothing:1 equal:7 never:1 having:1 future:1 others:1 report:1 simplify:1 richard:1 few:1 modern:1 beck:1 geometry:1 william:2 huge:3 highly:2 golub:1 semidefinite:1 behind:3 accurate:1 aspremon:1 integral:1 autocovariance:1 institut:1 euclidean:1 polynomial1:1 theoretical:2 minimal:14 fitted:1 column:2 industry:1 modeling:1 teboulle:2 kq:1 seventh:1 too:1 characterize:2 reported:1 perturbed:3 kxi:1 recht:1 cited:1 international:1 siam:8 physic:2 squared:1 thesis:1 choose:2 slowly:1 nest:2 ek:1 derivative:1 leading:2 potential:1 parrilo:1 scieur:2 de:4 sec:1 coefficient:9 explicitly:1 mp:1 multiplicative:2 view:1 extrapolation:16 try:2 performed:1 mpe:7 sup:3 francis:2 wynn:2 start:3 analyze:3 parallel:2 candes:1 contribution:1 minimize:3 square:1 wiener:2 generalize:1 produced:5 backtrack:2 craig:1 avram:2 acc:1 explain:2 reach:1 andre:1 definition:4 inexact:2 pp:1 james:1 proof:4 di:1 gain:1 proved:1 dataset:3 recall:3 lim:2 knowledge:1 organized:1 eddy:2 back:1 appears:1 alexandre:1 restarts:1 reflected:1 improved:2 formulation:5 strongly:5 furthermore:1 hand:2 ei:2 su:1 nonlinear:13 mode:1 logistic:4 
quality:1 grows:3 k22:8 regularization:7 hence:2 read:1 symmetric:7 alternating:1 criterion:1 generalized:1 cabay:1 geometrical:1 meaning:1 novel:1 recently:1 funding:1 functional:1 conditioning:1 exponentially:1 extend:1 discussed:2 significant:2 ai:4 unconstrained:1 grid:1 pm:1 mathematics:5 erc:1 nonlinearity:1 replicating:1 access:1 specification:1 impressive:1 etc:1 base:1 add:1 recent:1 showed:1 perspective:1 rieure:3 axa:1 yi:1 conomie:1 greater:1 additional:1 converge:2 ud:1 itn:1 u0:10 levinson:2 semi:2 harchaoui:1 stephen:1 reduces:1 smooth:6 technical:1 match:2 bach:2 lin:1 bigger:1 controlled:1 impact:4 prediction:1 converging:2 regression:5 basic:1 variant:2 essentially:1 metric:1 parenthesis:1 himself:1 iteration:12 want:2 argminq:2 ode:1 interval:3 unlike:2 markedly:1 induced:2 subject:2 kwk22:1 call:5 unitary:1 near:1 easy:1 iterate:1 fit:2 idea:1 chebyshev:9 expression:1 rms:1 linkage:1 effort:2 penalty:2 peter:1 algebraic:1 hessian:1 remark:1 varga:1 detailed:1 reduced:2 kck2:1 stabilized:1 notice:1 write:2 georg:1 key:2 imaging:1 overrelaxation:1 fraction:1 sum:1 run:2 inverse:1 hankel:3 appendix:7 comparable:2 bound:19 ct:2 shank:3 convergent:1 quadratic:3 encountered:2 oracle:6 durbin:2 refine:2 bv:1 constraint:1 tal:1 generates:1 u1:2 speed:2 argument:1 min:10 optimality:1 speedup:1 structured:1 combination:1 conjugate:8 smaller:1 slightly:1 em:1 nouvelles:1 taken:1 equation:7 remains:1 mathematik:2 know:1 tractable:1 fp7:1 observe:3 generic:4 rost:1 stepsize:2 batch:2 robustness:1 original:4 k32:1 assumes:1 running:2 exploit:1 restrictive:1 sipa:1 build:1 especially:1 emmanuel:1 classical:5 society:1 objective:2 quantity:1 strategy:2 kak2:1 gradient:29 wrap:1 distance:1 link:3 subspace:1 outer:1 me:2 unstable:4 reason:1 assuming:1 kk:4 providing:3 minimizing:1 ratio:1 potentially:2 ashia:1 design:3 nemirovskii:1 rn:9 perturbation:6 arbitrary:2 david:2 namely:1 paris:3 required:1 california:1 barcelona:1 nip:1 able:3 krylov:3 below:1 ku0:5 program:2 max:14 packard:1 royal:1 regularized:9 kek2:4 minimax:1 scheme:6 technology:1 julien:1 stan:1 started:1 aspremont:1 coupled:1 sn:1 prior:1 review:1 asymptotic:1 ina:2 loss:1 fully:3 lecture:3 catalyst:1 semialgebraic:1 degree:10 thresholding:1 lessard:1 share:1 last:2 keeping:1 side:2 allow:2 understand:1 ik2:2 institute:1 edinburgh:1 computes:2 autoregressive:1 author:1 made:2 adaptive:1 kei:2 far:1 programme:1 approximate:6 gene:1 deg:2 global:2 mairal:1 assumed:1 xi:33 spectrum:2 continuous:1 iterative:8 search:5 why:1 donn:1 table:1 lasserre:1 nature:1 ku:12 improving:2 necessarily:1 european:1 marc:1 pk:8 main:1 linearly:1 noise:1 x1:3 en:2 kqk22:1 aid:1 precision:4 explicit:4 exponential:2 crude:1 lw:1 jacobian:2 third:1 rk:6 bad:2 showing:1 decay:1 admits:2 divergent:1 consist:1 ci:19 phd:1 magnitude:1 fonds:1 kx:5 nk:1 simply:1 univariate:1 forming:1 kxk:1 tracking:1 partially:1 scalar:3 springer:2 satisfies:1 acceleration:17 quantifying:1 lipschitz:1 monic:3 rmpe:18 determined:1 averaging:3 lemma:3 conservative:1 called:3 e:1 la:2 people:1 support:1 alexander:1 accelerated:6 wibisono:1 extrapolate:3 |
MetaGrad: Multiple Learning Rates in Online Learning
Tim van Erven
Leiden University
[email protected]
Wouter M. Koolen
Centrum Wiskunde & Informatica
[email protected]
Abstract
In online convex optimization it is well known that certain subclasses of objective
functions are much easier than arbitrary convex functions. We are interested in
designing adaptive methods that can automatically get fast rates in as many such
subclasses as possible, without any manual tuning. Previous adaptive methods
are able to interpolate between strongly convex and general convex functions. We
present a new method, MetaGrad, that adapts to a much broader class of functions,
including exp-concave and strongly convex functions, but also various types of
stochastic and non-stochastic functions without any curvature. For instance, MetaGrad can achieve logarithmic regret on the unregularized hinge loss, even though
it has no curvature, if the data come from a favourable probability distribution.
MetaGrad's main feature is that it simultaneously considers multiple learning rates.
Unlike previous methods with provable regret guarantees, however, its learning
rates are not monotonically decreasing over time and are not tuned based on a
theoretically derived bound on the regret. Instead, they are weighted directly
proportional to their empirical performance on the data using a tilted exponential
weights master algorithm.
1 Introduction
Methods for online convex optimization (OCO) [28, 12] make it possible to optimize parameters
sequentially, by processing convex functions in a streaming fashion. This is important in time series
prediction where the data are inherently online; but it may also be convenient to process offline data
sets sequentially, for instance if the data do not all fit into memory at the same time or if parameters
need to be updated quickly when extra data become available.
The difficulty of an OCO task depends on the convex functions f1 , f2 , . . . , fT that need to be
optimized. The argument of these functions is a d-dimensional parameter vector w from a convex
domain U . Although this is abstracted away in the general framework, each function ft usually
measures the loss of the parameters on an underlying example (xt , yt ) in a machine learning task.
For example, in classification $f_t$ might be the hinge loss $f_t(w) = \max\{0, 1 - y_t \langle w, x_t \rangle\}$ or the logistic loss $f_t(w) = \ln\big(1 + e^{-y_t \langle w, x_t \rangle}\big)$, with $y_t \in \{-1, +1\}$. Thus the difficulty depends both on
the choice of loss and on the observed data.
There are different methods for OCO, depending on assumptions that can be made about the functions.
The simplest and most commonly used strategy is online gradient descent (GD), which does not
require any assumptions beyond convexity. GD updates parameters $w_{t+1} = w_t - \eta_t \nabla f_t(w_t)$ by taking a step in the direction of the negative gradient, where the step size is determined by a parameter $\eta_t$ called the learning rate. For learning rates $\eta_t \propto 1/\sqrt{t}$, GD guarantees that the regret over $T$ rounds, which measures the difference in cumulative loss between the online iterates $w_t$ and the best offline parameters $u$, is bounded by $O(\sqrt{T})$ [33]. Alternatively, if it is known beforehand that the functions are of an easier type, then better regret rates are sometimes possible. For instance, if the
functions are strongly convex, then logarithmic regret $O(\ln T)$ can be achieved by GD with much smaller learning rates $\eta_t \propto 1/t$ [14], and, if they are exp-concave, then logarithmic regret $O(d \ln T)$
can be achieved by the Online Newton Step (ONS) algorithm [14].
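As an illustration of the GD baseline under Protocol 1 below, the following sketch (ours; the ball domain of radius D and the unit constants are illustrative assumptions) implements both learning-rate schedules.

import numpy as np

def project_ball(w, D):
    norm = np.linalg.norm(w)
    return w if norm <= D else w * (D / norm)

def online_gd(grad_fns, d, D=1.0, schedule="sqrt"):
    """grad_fns: list of callables w -> g_t, one revealed per round."""
    w = np.zeros(d)
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(w.copy())
        eta = 1.0 / np.sqrt(t) if schedule == "sqrt" else 1.0 / t  # general vs strongly convex
        w = project_ball(w - eta * grad(w), D)
    return iterates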
This partitions OCO tasks into categories, leaving it to the user to choose the appropriate algorithm
for their setting. Such a strict partition, apart from being a burden on the user, depends on an extensive
cataloguing of all types of easier functions that might occur in practice. (See Section 3 for several
ways in which the existing list of easy functions can be extended.) It also immediately raises the
question of whether there are cases in between logarithmic and square-root regret (there are, see
Theorem 3 in Section 3), and which algorithm to use then. And, third, it presents the problem that
the appropriate algorithm might depend on (the distribution of) the data (again see Section 3), which
makes it entirely impossible to select the right algorithm beforehand.
These issues motivate the development of adaptive methods, which are no worse than $O(\sqrt{T})$ for
general convex functions, but also automatically take advantage of easier functions whenever possible.
An important step in this direction are the adaptive GD algorithm of Bartlett, Hazan, and Rakhlin
[2] and its proximal improvement by Do, Le, and Foo [8], which are able to interpolate between
strongly convex and general convex functions if they are provided with a data-dependent strong
convexity parameter in each round, and significantly outperform the main non-adaptive method
(i.e. Pegasos, [29]) in the experiments of Do et al. Here we consider a significantly richer class of
functions, which includes exp-concave functions, strongly convex functions, general convex functions
that do not change between rounds (even if they have no curvature), and stochastic functions whose
gradients satisfy the so-called Bernstein condition, which is well-known to enable fast rates in offline
statistical learning [1, 10, 19]. The latter group can again include functions without curvature, like
the unregularized hinge loss. All these cases are covered simultaneously by a new adaptive method
we call MetaGrad, for multiple eta gradient algorithm. MetaGrad maintains a covariance matrix of
size $d \times d$ where d is the parameter dimension. In the remainder of the paper we call this version full
MetaGrad. A reference implementation is available from [17]. We also design and analyze a faster
approximation that only maintains the d diagonal elements, called diagonal MetaGrad. Theorem 7
below implies the following:
Theorem 1. Let $g_t = \nabla f_t(w_t)$ and $V_T^u = \sum_{t=1}^T ((u - w_t)^\top g_t)^2$. Then the regret of full MetaGrad is simultaneously bounded by $O(\sqrt{T \ln \ln T})$, and by
$$\sum_{t=1}^T f_t(w_t) - \sum_{t=1}^T f_t(u) \;\le\; \sum_{t=1}^T (w_t - u)^\top g_t \;\le\; O\Big(\sqrt{V_T^u\, d \ln T} + d \ln T\Big) \quad \text{for any } u \in \mathcal{U}. \tag{1}$$
Theorem 1 bounds the regret in terms of a measure of variance $V_T^u$ that depends on the distance of
the algorithm's choices $w_t$ to the optimum u, and which, in favourable cases, may be significantly
smaller than T. Intuitively, this happens, for instance, when there is a stable optimum u that the
algorithm's choices $w_t$ converge to. Formal consequences are given in Section 3, which shows that
this bound implies faster than $O(\sqrt{T})$ regret rates, often logarithmic in T, for all functions in the rich
class mentioned above. In all cases the dependence on T in the rates matches what we would expect
based on related work in the literature, and in most cases the dependence on the dimension d is also
what we would expect. Only for strongly convex functions is there an extra factor d. It is an open
question whether this is a fundamental obstacle for which an even more general adaptive method is
needed, or whether it is an artefact of our analysis.
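For readers who want to inspect $V_T^u$ empirically, here is a small sketch (ours) that evaluates the variance measure of Theorem 1 from logged iterates and gradients.

import numpy as np

def variance_full(iterates, grads, u):
    """V_T^u = sum_t ((u - w_t)^T g_t)^2 for the full version."""
    return sum(float(np.dot(u - w, g)) ** 2 for w, g in zip(iterates, grads))

def variance_diag(iterates, grads, u):
    """Diagonal analogue: sum_t sum_i (u_i - w_{t,i})^2 g_{t,i}^2."""
    return sum(float(np.sum((u - w) ** 2 * g ** 2)) for w, g in zip(iterates, grads))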
The main difficulty in achieving the regret guarantee from Theorem 1 is tuning a learning rate
parameter $\eta$. In theory, $\eta$ should be roughly $1/\sqrt{V_T^u}$, but this is not possible using any existing
techniques, because the optimum u is unknown in advance, and tuning in terms of a uniform upper
bound $\max_u V_T^u$ ruins all desired benefits. MetaGrad therefore runs multiple slave algorithms, each
with a different learning rate, and combines them with a novel master algorithm that learns the
empirically best learning rate for the OCO task in hand. The slaves are instances of exponential
weights on the continuous parameters u with a suitable surrogate loss function, which in particular
causes the exponential weights distributions to be multivariate Gaussians. For the full version of
MetaGrad, the slaves are closely related to the ONS algorithm on the original losses, where each
slave receives the master's gradients instead of its own. It is shown that $\lceil \frac{1}{2} \log_2 T\rceil + 1$ slaves suffice,
which is at most 16 as long as $T \le 10^9$, and therefore seems computationally acceptable. If not, then
the number of slaves can be further reduced at the cost of slightly worse constants in the bound.
Protocol 1: Online Convex Optimization from First-order Information
Input: Convex set $\mathcal{U}$
1: for t = 1, 2, . . . do
2:   Learner plays $w_t \in \mathcal{U}$
3:   Environment reveals convex loss function $f_t : \mathcal{U} \to \mathbb{R}$
4:   Learner incurs loss $f_t(w_t)$ and observes (sub)gradient $g_t = \nabla f_t(w_t)$
5: end for
Related Work If we disregard computational efficiency, then the result of Theorem 1 can be
achieved by finely discretizing the domain U and running the Squint algorithm for prediction with
experts with each discretization point as an expert [16]. MetaGrad may therefore also be seen as a
computationally efficient extension of Squint to the OCO setting.
Our focus in this work is on adapting to sequences of functions ft that are easier than general convex
functions. A different direction in which faster rates are possible is by adapting to the domain U . As
we assume U to be fixed, we consider an upper bound D on the norm of the optimum u to be known.
In contrast, Orabona and Pál [24, 25] design methods that can adapt to the norm of u. One may also
look at the shape of $\mathcal{U}$. As can be seen in the analysis of the slaves, MetaGrad is based on a spherical
Gaussian prior on $\mathbb{R}^d$, which favours u with small $\ell_2$-norm. This is appropriate for $\mathcal{U}$ that are similar
to the Euclidean ball, but less so if U is more like a box (`1 -ball). In this case, it would be better
to run a copy of MetaGrad for each dimension separately, similarly to how the diagonal version of
the AdaGrad algorithm [9, 21] may be interpreted as running a separate copy of GD with a separate
learning rate for each dimension. AdaGrad further uses an adaptive tuning of the learning rates that is
able to take advantage of sparse gradient vectors, as can happen on data with rarely observed features.
We briefly compare to AdaGrad in some very simple simulations in Appendix A.1.
Another notion of adaptivity is explored in a series of work [13, 6, 31] obtaining tighter bounds
for linear functions ft that vary little between rounds (as measured either by their deviation from
the mean function or by successive differences). Such bounds imply super fast rates for optimizing
a fixed linear function, but reduce to slow $O(\sqrt{T})$ rates in the other cases of easy functions that
we consider. Finally, the way MetaGrad?s slaves maintain a Gaussian distribution on parameters u
is similar in spirit to AROW and related confidence weighted methods, as analyzed by Crammer,
Kulesza, and Dredze [7] in the mistake bound model.
Outline We start with the main definitions in the next section. Then Section 3 contains an extensive
set of examples where Theorem 1 leads to fast rates, Section 4 presents the MetaGrad algorithm,
and Section 5 provides the analysis leading to Theorem 7, which is a more detailed statement of
Theorem 1 with an improved dependence on the dimension in some particular cases and with exact
constants. The details of the proofs can be found in the appendix.
2 Setup

Let $\mathcal{U} \subseteq \mathbb{R}^d$ be a closed convex set, which we assume contains the origin 0 (if not, it can always
be translated). We consider algorithms for Online Convex Optimization over $\mathcal{U}$, which operate
according to the protocol displayed in Protocol 1. Let $w_t \in \mathcal{U}$ be the iterate produced by the
algorithm in round t, let $f_t : \mathcal{U} \to \mathbb{R}$ be the convex loss function produced by the environment and let
$g_t = \nabla f_t(w_t)$ be the (sub)gradient, which is the feedback given to the algorithm (if $f_t$ is not differentiable at $w_t$, any choice of subgradient $g_t \in \partial f_t(w_t)$ is allowed). We abbreviate the
regret with respect to $u \in \mathcal{U}$ as $R_T^u = \sum_{t=1}^T (f_t(w_t) - f_t(u))$, and define our measure of variance as
$V_T^u = \sum_{t=1}^T ((u - w_t)^\top g_t)^2$ for the full version of MetaGrad and $V_T^u = \sum_{t=1}^T \sum_{i=1}^d (u_i - w_{t,i})^2 g_{t,i}^2$
for the diagonal version. By convexity of $f_t$, we always have $f_t(w_t) - f_t(u) \le (w_t - u)^\top g_t$. Defining
$\tilde{R}_T^u = \sum_{t=1}^T (w_t - u)^\top g_t$, this implies the first inequality in Theorem 1: $R_T^u \le \tilde{R}_T^u$. A stronger
requirement than convexity is that a function f is exp-concave, which (for exp-concavity parameter
1) means that $e^{-f}$ is concave. Finally, we impose the following standard boundedness assumptions,
distinguishing between the full version of MetaGrad (left column) and the diagonal version (right
column): for all $u, v \in \mathcal{U}$, all dimensions i and all times t,
$$\|u - v\| \le D^{full}, \quad \|g_t\| \le G^{full} \qquad\qquad |u_i - v_i| \le D^{diag}, \quad |g_{t,i}| \le G^{diag}. \tag{2}$$
Here, and throughout the paper, the norm of a vector (e.g. $\|g_t\|$) will always refer to the $\ell_2$-norm.
For the full version of MetaGrad, the Cauchy-Schwarz inequality further implies that $(u - v)^\top g_t \le \|u - v\| \cdot \|g_t\| \le D^{full} G^{full}$.
3 Fast Rate Examples
In this section, we motivate our interest in the adaptive bound (1) by giving a series of examples in
which it provides fast rates. These fast rates are all derived from two general sufficient conditions:
one based on the directional derivative of the functions ft and one for stochastic gradients that satisfy
the Bernstein condition, which is the standard condition for fast rates in off-line statistical learning.
Simple simulations that illustrate the conditions are provided in Appendix A.1 and proofs are also
postponed to Appendix A.
Directional Derivative Condition In order to control the regret with respect to some point u, the
first condition requires a quadratic lower bound on the curvature of the functions ft in the direction
of u:
Theorem 2. Suppose, for a given $u \in \mathcal{U}$, there exist constants a, b > 0 such that the functions $f_t$ all satisfy
$$f_t(u) \;\ge\; f_t(w) + a\,(u - w)^\top \nabla f_t(w) + b\,\big((u - w)^\top \nabla f_t(w)\big)^2 \quad \text{for all } w \in \mathcal{U}. \tag{3}$$
Then any method with regret bound (1) incurs logarithmic regret, $R_T^u = O(d \ln T)$, with respect to u.
The case a = 1 of this condition was introduced by Hazan, Agarwal, and Kale [14], who show that
it is satisfied for all $u \in \mathcal{U}$ by exp-concave and strongly convex functions. The rate $O(d \ln T)$ is
also what we would expect by summing the asymptotic offline rate obtained by ridge regression on
the squared loss [30, Section 5.2], which is exp-concave. Our extension to a > 1 is technically a
minor step, but it makes the condition much more liberal, because it may then also be satisfied by
functions that do not have any curvature. For example, suppose that ft = f is a fixed convex function
that does not change with t. Then, when $u^* = \arg\min_u f(u)$ is the offline minimizer, we have
$(u^* - w)^\top \nabla f(w) \in [-G^{full} D^{full}, 0]$, so that
$$f(u^*) - f(w) \;\ge\; (u^* - w)^\top \nabla f(w) \;\ge\; 2\,(u^* - w)^\top \nabla f(w) + \frac{1}{D^{full} G^{full}}\big((u^* - w)^\top \nabla f(w)\big)^2,$$
where the first inequality uses only convexity of f . Thus condition (3) is satisfied by any fixed convex
function, even if it does not have any curvature at all, with a = 2 and b = 1/(Gfull Dfull ).
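The claim can be checked numerically. The sketch below (our illustration, not from the paper) verifies condition (3) with a = 2 and b = 1/(G^full D^full) for the fixed convex function f(w) = |w| on U = [-1, 1], for which D = 2 and G = 1.

import numpy as np

def f(w): return abs(w)
def grad_f(w): return np.sign(w) if w != 0 else 1.0  # any subgradient at 0 works here

D, G = 2.0, 1.0
a, b = 2.0, 1.0 / (G * D)
u_star = 0.0  # offline minimizer of |w| on [-1, 1]
rng = np.random.default_rng(0)
for w in rng.uniform(-1, 1, size=1000):
    s = (u_star - w) * grad_f(w)  # directional derivative term (u* - w) f'(w)
    assert f(u_star) >= f(w) + a * s + b * s ** 2 - 1e-12
print("condition (3) holds at all sampled points")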
Bernstein Stochastic Gradients The possibility of getting fast rates even without any curvature
is intriguing, because it goes beyond the usual strong convexity or exp-concavity conditions. In
the online setting, the case of fixed functions ft = f seems rather restricted, however, and may in
fact be handled by offline optimization methods. We therefore seek to loosen this requirement by
replacing it by a stochastic condition on the distribution of the functions ft . The relation between
variance bounds like Theorem 1 and fast rates in the stochastic setting is studied in depth by Koolen,
Grünwald, and Van Erven [19], who obtain fast rate results both in expectation and in probability.
Here we provide a direct proof only for the expected regret, which allows a simplified analysis.
Suppose the functions ft are independent and identically distributed (i.i.d.), with common distribution
$\mathbb{P}$. Then we say that the gradients satisfy the $(B, \beta)$-Bernstein condition with respect to the stochastic
optimum $u^* = \arg\min_{u \in \mathcal{U}} \mathbb{E}_{f \sim \mathbb{P}}[f(u)]$ if
$$(w - u^*)^\top\, \mathbb{E}_f\big[\nabla f(w) \nabla f(w)^\top\big]\, (w - u^*) \;\le\; B\,\Big((w - u^*)^\top \mathbb{E}_f[\nabla f(w)]\Big)^{\beta} \quad \text{for all } w \in \mathcal{U}. \tag{4}$$
This is an instance of the well-known Bernstein condition from offline statistical learning [1, 10],
applied to the linearized excess loss $(w - u^*)^\top \nabla f(w)$. As shown in Appendix H, imposing the
condition for the linearized excess loss is a weaker requirement than imposing it for the original
excess loss $f(w) - f(u^*)$.
Algorithm 1: MetaGrad Master
Input: Grid of learning rates $\frac{1}{5DG} \ge \eta_1 \ge \eta_2 \ge \ldots$ with prior weights $\pi_1^{\eta_1}, \pi_1^{\eta_2}, \ldots$   ▷ As in (8)
1: for t = 1, 2, . . . do
2:   Get prediction $w_t^\eta \in \mathcal{U}$ of slave (Algorithm 2) for each $\eta$
3:   Play $w_t = \sum_\eta \pi_t^\eta\, \eta\, w_t^\eta \,\big/\, \sum_\eta \pi_t^\eta\, \eta$   ▷ Tilted Exponentially Weighted Average
4:   Observe gradient $g_t = \nabla f_t(w_t)$
5:   Update $\pi_{t+1}^\eta = \pi_t^\eta\, e^{-\alpha\, \ell_t^\eta(w_t^\eta)} \,\big/\, \sum_\eta \pi_t^\eta\, e^{-\alpha\, \ell_t^\eta(w_t^\eta)}$ for all $\eta$   ▷ Exponential Weights with surrogate loss (6)
6: end for
Theorem 3. If the gradients satisfy the $(B, \beta)$-Bernstein condition for $B > 0$ and $\beta \in (0, 1]$ with
respect to $u^* = \arg\min_{u \in \mathcal{U}} \mathbb{E}_{f \sim \mathbb{P}}[f(u)]$, then any method with regret bound (1) incurs expected
regret $\mathbb{E}[R_T^{u^*}] = O\big((B d \ln T)^{1/(2-\beta)}\, T^{(1-\beta)/(2-\beta)} + d \ln T\big)$.
For $\beta = 1$, the rate becomes $O(d \ln T)$, just like for fixed functions, and for smaller $\beta$ it is in between
logarithmic and $O(\sqrt{dT})$. For instance, the hinge loss on the unit ball with i.i.d. data satisfies the
Bernstein condition with $\beta = 1$, which implies an $O(d \ln T)$ rate. (See Appendix A.4.) It is common
to add $\ell_2$-regularization to the hinge loss to make it strongly convex, but this example shows that that
is not necessary to get logarithmic regret.
4 MetaGrad Algorithm
In this section we explain the two versions (full and diagonal) of the MetaGrad algorithm. We will
make use of the following definitions:
$$M_t^{full} := g_t g_t^\top, \qquad M_t^{diag} := \mathrm{diag}(g_{t,1}^2, \ldots, g_{t,d}^2), \qquad \alpha^{full} := 1, \qquad \alpha^{diag} := 1/d. \tag{5}$$
Depending on context, $w_t \in \mathcal{U}$ will refer to the full or diagonal MetaGrad prediction in round t. In
the remainder we will drop the superscript from the letters above, which will always be clear from
context.
MetaGrad will be defined by means of the following surrogate loss $\ell_t^\eta(u)$, which depends on a
parameter $\eta > 0$ that trades off regret compared to u with the square of the scaled directional
derivative towards u (full case) or its approximation (diag case):
$$\ell_t^\eta(u) := -\eta\,(w_t - u)^\top g_t + \eta^2\,(u - w_t)^\top M_t\,(u - w_t). \tag{6}$$
Our surrogate loss consists of a linear and a quadratic part. Using the language of Orabona, Crammer,
and Cesa-Bianchi [26], the data-dependent quadratic part causes a "time-varying regularizer" and
Duchi, Hazan, and Singer [9] would call it "temporal adaptation of the proximal function". The sum
of quadratic terms in our surrogate is what appears in the regret bound of Theorem 1.
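In code, the surrogate (6) is a one-liner for either choice of $M_t$ in (5); the following sketch (ours) is a direct transcription.

import numpy as np

def surrogate_loss(u, w_t, g_t, eta, version="full"):
    """ell_t^eta(u) = -eta (w_t - u)^T g_t + eta^2 (u - w_t)^T M_t (u - w_t)."""
    diff = u - w_t
    linear = -eta * np.dot(w_t - u, g_t)
    if version == "full":   # M_t = g_t g_t^T, so the quadratic collapses to a square
        quad = eta ** 2 * np.dot(diff, g_t) ** 2
    else:                   # M_t = diag(g_{t,1}^2, ..., g_{t,d}^2)
        quad = eta ** 2 * np.sum(diff ** 2 * g_t ** 2)
    return linear + quad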
The MetaGrad algorithm is a two-level hierarchical construction, displayed as Algorithms 1 (master
algorithm that learns the learning rate) and 2 (sub-module, a copy running for each learning rate ?
from a finite grid). Based on our analysis in the next section, we recommend using the grid in (8).
Master The task of the Master Algorithm 1 is to learn the empirically best learning rate $\eta$ (parameter
of the surrogate loss $\ell_t^\eta$), which is notoriously difficult to track online because the regret is non-monotonic over rounds and may have multiple local minima as a function of $\eta$ (see [18] for a study
in the expert setting). The standard technique is therefore to derive a monotonic upper bound on
the regret and tune the learning rate optimally for the bound. In contrast, our approach, inspired
by the approach for combinatorial games of Koolen and Van Erven [16, Section 4], is to have our
master aggregate the predictions of a discrete grid of learning rates. Although we provide a formal
analysis of the regret, the master algorithm does not depend on the outcome of this analysis, so any
Algorithm 2: MetaGrad Slave
Input: Learning rate $0 < \eta \le \frac{1}{5DG}$, domain size $D > 0$
1: $w_1^\eta = 0$ and $\Sigma_1^\eta = D^2 I$
2: for t = 1, 2, . . . do
3:   Issue $w_t^\eta$ to master (Algorithm 1)
4:   Observe gradient $g_t = \nabla f_t(w_t)$   ▷ Gradient at master point $w_t$
5:   Update $\Sigma_{t+1}^\eta = \big(\frac{1}{D^2} I + 2\eta^2 \sum_{s=1}^t M_s\big)^{-1}$,
     $\tilde{w}_{t+1}^\eta = w_t^\eta - \Sigma_{t+1}^\eta \big(\eta\, g_t + 2\eta^2 M_t (w_t^\eta - w_t)\big)$,
     $w_{t+1}^\eta = \Pi_{\mathcal{U}}^{\Sigma_{t+1}^\eta}\big(\tilde{w}_{t+1}^\eta\big)$, with projection $\Pi_{\mathcal{U}}^{\Sigma}(w) = \arg\min_{u \in \mathcal{U}}\, (u - w)^\top \Sigma^{-1} (u - w)$
6: end for
Implementation: For $M_t = M_t^{diag}$ only maintain the diagonal of $\Sigma_t^\eta$. For $M_t = M_t^{full}$ use the rank-one
update $\Sigma_{t+1}^\eta = \Sigma_t^\eta - \frac{2\eta^2\, \Sigma_t^\eta g_t g_t^\top \Sigma_t^\eta}{1 + 2\eta^2\, g_t^\top \Sigma_t^\eta g_t}$ and simplify $\tilde{w}_{t+1}^\eta = w_t^\eta - \eta\, \Sigma_{t+1}^\eta g_t \big(1 + 2\eta\, g_t^\top (w_t^\eta - w_t)\big)$.
slack in our bounds does not feed back into the algorithm. The master is in fact very similar to
the well-known exponential weights method (line 5), run on the surrogate losses, except that in the
predictions the weights of the slaves are tilted by their learning rates (line 3), having the effect of
giving a larger weight to larger $\eta$. The internal parameter $\alpha$ is set to $\alpha^{full}$ from (5) for the full version
of the algorithm, and to $\alpha^{diag}$ for the diagonal version.
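A sketch of one master round as we read Algorithm 1 (our transcription; the placement of the internal parameter alpha in the update follows our reading of line 5, and the full-case surrogate is inlined):

import numpy as np

def master_round(slave_preds, weights, etas, grad_at_wt_fn, alpha=1.0):
    """slave_preds: array (m, d); weights, etas: arrays (m,)."""
    tilt = weights * etas
    w_t = (tilt[:, None] * slave_preds).sum(axis=0) / tilt.sum()  # line 3: tilted average
    g_t = grad_at_wt_fn(w_t)                                      # line 4
    # line 5: exponential weights on the surrogate losses (full M_t)
    losses = np.array([
        -eta * np.dot(w_t - w_eta, g_t) + eta**2 * np.dot(w_eta - w_t, g_t)**2
        for eta, w_eta in zip(etas, slave_preds)
    ])
    new_w = weights * np.exp(-alpha * losses)
    return w_t, g_t, new_w / new_w.sum()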
Slaves The role of the Slave Algorithm 2 is to guarantee small surrogate regret for a fixed learning
rate $\eta$. We consider two versions, corresponding to whether we take rank-one or diagonal matrices
$M_t$ (see (5)) in the surrogate (6). The first version maintains a full $d \times d$ covariance matrix and has
the best regret bound. The second version uses only diagonal matrices (with d non-zero entries),
thus trading off a weaker bound with a better run-time in high dimensions. Algorithm 2 presents
the update equations in a computationally efficient form. Their intuitive motivation is given in the
proof of Lemma 5, where we show that the standard exponential weights method with Gaussian prior
and surrogate losses $\ell_t^\eta(u)$ yields a Gaussian posterior with mean $w_t^\eta$ and covariance matrix $\Sigma_t^\eta$. The
full version of MetaGrad is closely related to the Online Newton Step algorithm [14] running on the
original losses $f_t$: the differences are that each Slave receives the Master's gradients $g_t = \nabla f_t(w_t)$
instead of its own $\nabla f_t(w_t^\eta)$, and that an additional term $2\eta^2 M_t(w_t^\eta - w_t)$ in line 5 adjusts for the
difference between the Slave's parameters $w_t^\eta$ and the Master's parameters $w_t$. MetaGrad is therefore
a bona fide first-order algorithm that only accesses ft through gt . We also note that we have chosen
the Mirror Descent version that iteratively updates and projects (see line 5). One might alternatively
consider the Lazy Projection version (as in [34, 23, 32]) that forgets past projections when updating
on new data. Since projections are typically computationally expensive, we have opted for the Mirror
Descent version, which we expect to project less often, since a projected point seems less likely to
update to a point outside of the domain than an unprojected point.
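For the full version, the rank-one form from the Implementation note of Algorithm 2 can be transcribed as follows (our sketch; the projection of line 5 is omitted and would be applied whenever the update leaves U):

import numpy as np

def slave_update(w_eta, Sigma, w_master, g, eta):
    Sg = Sigma @ g
    denom = 1.0 + 2.0 * eta**2 * g @ Sg
    Sigma_new = Sigma - (2.0 * eta**2 / denom) * np.outer(Sg, Sg)  # rank-one downdate
    w_new = w_eta - eta * (Sigma_new @ g) * (1.0 + 2.0 * eta * g @ (w_eta - w_master))
    return w_new, Sigma_new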
Total run time As mentioned, the running time is dominated by the slaves. Ignoring the projection,
a slave with full covariance matrix takes $O(d^2)$ time to update, while slaves with diagonal covariance
matrix take $O(d)$ time. If there are m slaves, this makes the overall computational effort respectively
$O(md^2)$ and $O(md)$, both in time per round and in memory. Our analysis below indicates that
$m = 1 + \lceil \frac{1}{2} \log_2 T \rceil$ slaves suffice, so $m \le 16$ as long as $T \le 10^9$. In addition, each slave may
incur the cost of a projection, which depends on the shape of the domain $\mathcal{U}$. To get a sense for the
projection cost we consider a typical example. For the Euclidean ball a diagonal projection can be
performed using a few iterations of Newton's method to get the desired precision. Each such iteration
costs $O(d)$ time. This is generally considered affordable. For full projections the story is starkly
different. We typically reduce to the diagonal case by a basis transformation, which takes $O(d^3)$ to
compute using SVD. Hence here the projection dwarfs the other run time by an order of magnitude.
We refer to [9] for examples of how to compute projections for various domains U . Finally, we
remark that a potential speed-up is possible by running the slaves in parallel.
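To make the diagonal-projection example concrete, here is a sketch (ours) of the Mahalanobis projection onto the Euclidean ball: the KKT conditions for minimizing $(u-w)^\top \Sigma^{-1}(u-w)$ subject to $\|u\| \le D$ with diagonal $\Sigma$ give $u_i = w_i/(1 + \lambda\sigma_i)$ for some $\lambda \ge 0$; we locate $\lambda$ by bisection for simplicity, whereas the text suggests Newton's method.

import numpy as np

def project_ball_diag(w, sigma, D, iters=100):
    """Project w onto {u : ||u|| <= D} in the metric Sigma^{-1} = diag(1/sigma)."""
    if np.linalg.norm(w) <= D:
        return w.copy()
    def radius(lam):
        return np.linalg.norm(w / (1.0 + lam * sigma))
    lo, hi = 0.0, 1.0
    while radius(hi) > D:        # grow until the constraint is met
        hi *= 2.0
    for _ in range(iters):       # bisection on the monotone function radius(lam)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if radius(mid) > D else (lo, mid)
    lam = 0.5 * (lo + hi)
    return w / (1.0 + lam * sigma)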
5 Analysis
We conduct the analysis in three parts. We first discuss the master, then the slaves and finally their
composition. The idea is the following. The master guarantees for all $\eta$ simultaneously that
$$0 = \sum_{t=1}^T \ell_t^\eta(w_t) \;\le\; \sum_{t=1}^T \ell_t^\eta(w_t^\eta) + \text{master regret compared to $\eta$-slave}. \tag{7a}$$
Then each $\eta$-slave takes care of learning u, with regret $O(d \ln T)$:
$$\sum_{t=1}^T \ell_t^\eta(w_t^\eta) \;\le\; \sum_{t=1}^T \ell_t^\eta(u) + \text{$\eta$-slave regret compared to } u. \tag{7b}$$
These two statements combine to
$$-\eta \sum_{t=1}^T (w_t - u)^\top g_t + \eta^2 V_T^u \;=\; \sum_{t=1}^T \ell_t^\eta(u) \;\le\; \text{sum of regrets above}, \tag{7c}$$
and the overall result follows by optimizing $\eta$.

5.1 Master
To show that we can aggregate the slave predictions, we consider the potential $\Phi_T := \sum_\eta \pi_1^\eta\, e^{-\alpha \sum_{t=1}^T \ell_t^\eta(w_t^\eta)}$. In Appendix B, we bound the last factor $e^{-\alpha\, \ell_T^\eta(w_T^\eta)}$ above by its tangent at
$w_T^\eta = w_T$ and obtain an objective that can be shown to be equal to $\Phi_{T-1}$ regardless of the gradient
$g_T$ if $w_T$ is chosen according to the Master algorithm. It follows that the potential is non-increasing:

Lemma 4 (Master combines slaves). The Master Algorithm guarantees $1 = \Phi_0 \ge \Phi_1 \ge \ldots \ge \Phi_T$.

As $0 \le -\frac{1}{\alpha} \ln \Phi_T \le \sum_{t=1}^T \ell_t^\eta(w_t^\eta) + \frac{1}{\alpha} \ln \frac{1}{\pi_1^\eta}$ for every $\eta$, this implements step (7a) of our overall proof
strategy, with master regret $\frac{1}{\alpha} \ln \frac{1}{\pi_1^\eta}$. We further remark that we may view our potential function $\Phi_T$
as a game-theoretic supermartingale in the sense of Chernov, Kalnishkan, Zhdanov, and Vovk [5],
and this lemma as establishing that the MetaGrad Master is the corresponding defensive forecasting
strategy.
5.2 Slaves
Next we implement step (7b), which requires proving an O(d ln T ) regret bound in terms of the
surrogate loss for each MetaGrad slave. In the full case, the surrogate loss is jointly exp-concave, and
in light of the analysis of ONS by Hazan, Agarwal, and Kale [14] such a result is not surprising. For
the diagonal case, the surrogate loss lacks joint exp-concavity, but we can use exp-concavity in each
direction separately, and verify that the projections that tie the dimensions together do not cause any
trouble. In Appendix C we analyze both cases simultaneously, and obtain the following bound on the
regret:
Lemma 5 (Surrogate regret bound). For $0 < \eta \le \frac{1}{5DG}$, let $\ell_t^\eta(u)$ be the surrogate losses as defined
in (6) (either the full or the diagonal version). Then the regret of Slave Algorithm 2 is bounded by
$$\sum_{t=1}^T \ell_t^\eta(w_t^\eta) \;\le\; \sum_{t=1}^T \ell_t^\eta(u) + \frac{1}{2D^2}\|u\|^2 + \frac{1}{2} \ln\det\Big(I + 2\eta^2 D^2 \sum_{t=1}^T M_t\Big) \quad \text{for all } u \in \mathcal{U}.$$
5.3 Composition
To complete the analysis of MetaGrad, we first put the regret bounds for the master and slaves together
as in (7c). We then discuss how to choose the grid of $\eta$s, and optimize $\eta$ over this grid to get our main
result. Proofs are postponed to Appendix D.
Theorem 6 (Grid point regret). The full and diagonal versions of MetaGrad, with corresponding
definitions from (2) and (5), guarantee that, for any grid point $\eta$ with prior weight $\pi_1^\eta$,
$$\tilde{R}_T^u \;\le\; \eta\, V_T^u + \frac{\frac{1}{2D^2}\|u\|^2 + \frac{1}{\alpha}\ln\frac{1}{\pi_1^\eta} + \frac{1}{2}\ln\det\big(I + 2\eta^2 D^2 \sum_{t=1}^T M_t\big)}{\eta} \quad \text{for all } u \in \mathcal{U}.$$
Grid We now specify the grid points and corresponding prior. Theorem 6 above implies that any
two $\eta$ that are within a constant factor of each other will guarantee the same bound up to essentially
the same constant factor. We therefore choose an exponentially spaced grid with a heavy tailed prior
(see Appendix E):
$$\eta_i := \frac{2^{-i}}{5DG} \quad\text{and}\quad \pi_1^{\eta_i} := \frac{C}{(i + 1)(i + 2)} \quad \text{for } i = 0, 1, 2, \ldots, \lceil \tfrac{1}{2} \log_2 T \rceil, \tag{8}$$
with normalization $C = 1 + 1/(1 + \lceil \frac{1}{2}\log_2 T\rceil)$. At the cost of a worse constant factor in the
bounds, the number of slaves can be reduced by using a larger spacing factor, or by omitting some
of the smallest learning rates. The net effect of (8) is that, for any $\eta \in [\frac{1}{5DG\sqrt{T}}, \frac{1}{5DG}]$ there is an
$\eta_i \in [\frac{1}{2}\eta, \eta]$, for which $\ln\frac{1}{\pi_1^{\eta_i}} \le 2\ln(i + 2) = O(\ln\ln(1/\eta_i)) = O(\ln\ln(1/\eta))$. As these costs
are independent of T, our regret guarantees still hold if the grid (8) is instantiated with T replaced by
any upper bound.
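The grid and prior in (8) are straightforward to construct; a sketch (ours):

import numpy as np

def metagrad_grid(D, G, T):
    n = int(np.ceil(0.5 * np.log2(T)))
    etas = np.array([2.0**(-i) / (5.0 * D * G) for i in range(n + 1)])
    C = 1.0 + 1.0 / (1.0 + n)                            # normalization constant
    prior = np.array([C / ((i + 1) * (i + 2)) for i in range(n + 1)])
    return etas, prior

etas, prior = metagrad_grid(D=1.0, G=1.0, T=10**4)
print(len(etas), prior.sum())  # 8 slaves for T = 10^4; prior sums to 1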
The final step is to apply Theorem 6 to this grid, and to properly select the learning rate $\eta_i$ in the
bound. This leads to our main result:
Theorem 7 (MetaGrad Regret Bound). Let $S_T = \sum_{t=1}^T M_t$ and $V_{T,i}^u = \sum_{t=1}^T (u_i - w_{t,i})^2 g_{t,i}^2$.
Then the regret of MetaGrad, with corresponding definitions from (2) and (5) and with grid and prior
as in (8), is bounded by
$$\tilde{R}_T^u \;\le\; \sqrt{8 V_T^u \Big(\tfrac{1}{D^2}\|u\|^2 + \tfrac{1}{\alpha}\big(\Xi_T + C_T\big)\Big)} \;+\; 5DG\Big(\tfrac{1}{D^2}\|u\|^2 + \tfrac{1}{\alpha}\big(\Xi_T + C_T\big)\Big) \quad \text{for all } u \in \mathcal{U},$$
where
$$\Xi_T \;\le\; \min\bigg\{\ln\det\Big(I + \frac{D^2\, \mathrm{rk}(S_T)}{V_T^u}\, S_T\Big),\;\; \mathrm{rk}(S_T)\,\ln\Big(\frac{D^2 \sum_{t=1}^T \|g_t\|^2}{V_T^u}\Big)\bigg\} \;=\; O(d \ln(D^2 G^2 T))$$
for the full version of the algorithm,
$$\Xi_T \;=\; \sum_{i=1}^d \ln\Bigg(\frac{D^2 \sum_{t=1}^T g_{t,i}^2}{V_{T,i}^u}\Bigg) \;=\; O(d \ln(D^2 G^2 T))$$
for the diagonal version, and $C_T = 4 \ln\big(3 + \tfrac{1}{2}\log_2 T\big) = O(\ln \ln T)$ in both cases. Moreover, for
both versions of the algorithm, the regret is simultaneously bounded by
$$\tilde{R}_T^u \;\le\; \sqrt{8 D^2 \sum_{t=1}^T \|g_t\|^2\, \Big(\tfrac{1}{D^2}\|u\|^2 + \tfrac{1}{\alpha} C_T\Big)} \;+\; 5DG\Big(\tfrac{1}{D^2}\|u\|^2 + \tfrac{1}{\alpha} C_T\Big) \quad \text{for all } u \in \mathcal{U}.$$
These two bounds together show that the full version of MetaGrad achieves the new adaptive guarantee
of Theorem 1. The diagonal version behaves like running the full version separately per dimension,
but with a single shared learning rate.
6 Discussion and Future Work
One may consider extending MetaGrad in various directions. In particular it would be interesting
to speed up the method in high dimensions, for instance by sketching [20]. A broader question is
to identify and be adaptive to more types of easy functions that are of practical interest. One may
suspect there to be a price (in regret overhead and in computation) for broader adaptivity, but based
on our results for MetaGrad it does not seem like we are already approaching the point where this
price is no longer worth paying.
Acknowledgments We would like to thank Haipeng Luo and the anonymous reviewers (in particular Reviewer 6) for valuable comments. Koolen acknowledges support by the Netherlands
Organization for Scientific Research (NWO, Veni grant 639.021.439).
References
[1] P. L. Bartlett and S. Mendelson. Empirical minimization. Probability Theory and Related Fields, 135(3):311–334, 2006.
[2] P. L. Bartlett, E. Hazan, and A. Rakhlin. Adaptive online gradient descent. In NIPS 20, pages 65–72, 2007.
[3] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2/3):321–352, 2007.
[4] A. V. Chernov and V. Vovk. Prediction with advice of unknown number of experts. In Proc. of the 26th Conf. on Uncertainty in Artificial Intelligence, pages 117–125, 2010.
[5] A. V. Chernov, Y. Kalnishkan, F. Zhdanov, and V. Vovk. Supermartingales in prediction with expert advice. Theoretical Computer Science, 411(29-30):2647–2669, 2010.
[6] C.-K. Chiang, T. Yang, C.-J. Le, M. Mahdavi, C.-J. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In Proc. of the 25th Annual Conf. on Learning Theory (COLT), pages 6.1–6.20, 2012.
[7] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. In NIPS 22, pages 414–422, 2009.
[8] C. B. Do, Q. V. Le, and C.-S. Foo. Proximal regularization for online and batch learning. In Proc. of the 26th Annual International Conf. on Machine Learning (ICML), pages 257–264, 2009.
[9] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[10] T. van Erven, P. D. Grünwald, N. A. Mehta, M. D. Reid, and R. C. Williamson. Fast rates in statistical and online learning. Journal of Machine Learning Research, 16:1793–1861, 2015.
[11] P. Gaillard, G. Stoltz, and T. van Erven. A second-order bound with excess losses. In Proc. of the 27th Annual Conf. on Learning Theory (COLT), pages 176–196, 2014.
[12] E. Hazan. Introduction to online optimization. Draft, April 10, 2016, ocobook.cs.princeton.edu, 2016.
[13] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
[14] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[15] S. Ihara. Information Theory for Continuous Systems. World Scientific, 1993.
[16] W. M. Koolen and T. van Erven. Second-order quantile methods for experts and combinatorial games. In Proc. of the 28th Annual Conf. on Learning Theory (COLT), pages 1155–1175, 2015.
[17] W. M. Koolen and T. van Erven. MetaGrad open source code. bitbucket.org/wmkoolen/metagrad, 2016.
[18] W. M. Koolen, T. van Erven, and P. D. Grünwald. Learning the learning rate for prediction with expert advice. In NIPS 27, pages 2294–2302, 2014.
[19] W. M. Koolen, P. D. Grünwald, and T. van Erven. Combining adversarial guarantees and stochastic fast rates in online learning. In NIPS 29, 2016.
[20] H. Luo, A. Agarwal, N. Cesa-Bianchi, and J. Langford. Efficient second order online learning by sketching. In NIPS 29, 2016.
[21] H. B. McMahan and M. J. Streeter. Adaptive bound optimization for online convex optimization. In Proc. of the 23rd Annual Conf. on Learning Theory (COLT), pages 244–256, 2010.
[22] T. Mikolov, K. Chen, G. S. Corrado, and J. Dean. Efficient estimation of word representations in vector space. International Conf. on Learning Representations, 2013. Arxiv.org/abs/1301.3781.
[23] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[24] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In NIPS 27, pages 1116–1124, 2014.
[25] F. Orabona and D. Pál. Coin betting and parameter-free online learning. In NIPS 29, 2016.
[26] F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Machine Learning, 99(3):411–435, 2015.
[27] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
[28] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
[29] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
[30] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In NIPS 23, pages 2199–2207, 2010.
[31] J. Steinhardt and P. Liang. Adaptivity and optimism: An improved exponentiated gradient algorithm. In Proc. of the 31st Annual International Conf. on Machine Learning (ICML), pages 1593–1601, 2014.
[32] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.
[33] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proc. of the 20th Annual International Conf. on Machine Learning (ICML), pages 928–936, 2003.
[34] M. Zinkevich. Theoretical Guarantees for Algorithms in Multi-Agent Settings. PhD thesis, Carnegie Mellon University, 2004.
Graphical Time Warping for Joint Alignment of
Multiple Curves
Yizhi Wang
Virginia Tech
[email protected]
David J. Miller
Pennsylvania State University
[email protected]
Yue Wang
Virginia Tech
[email protected]
Kira Poskanzer
University of California, San Francisco
[email protected]
Lin Tian
University of California, Davis
[email protected]
Guoqiang Yu
Virginia Tech
[email protected]
Abstract
Dynamic time warping (DTW) is a fundamental technique in time series analysis
for comparing one curve to another using a flexible time-warping function. However, it was designed to compare a single pair of curves. In many applications,
such as in metabolomics and image series analysis, alignment is simultaneously
needed for multiple pairs. Because the underlying warping functions are often
related, independent application of DTW to each pair is a sub-optimal solution.
Yet, it is largely unknown how to efficiently conduct a joint alignment with all
warping functions simultaneously considered, since any given warping function
is constrained by the others and dynamic programming cannot be applied. In
this paper, we show that the joint alignment problem can be transformed into a
network flow problem and thus can be exactly and efficiently solved by the max
flow algorithm, with a guarantee of global optimality. We name the proposed
approach graphical time warping (GTW), emphasizing the graphical nature of
the solution and that the dependency structure of the warping functions can be
represented by a graph. Modifications of DTW, such as windowing and weighting,
are readily derivable within GTW. We also discuss optimal tuning of parameters
and hyperparameters in GTW. We illustrate the power of GTW using both synthetic
data and a real case study of an astrocyte calcium movie.
1 Introduction
Time series, such as neural recordings, economic observations and biological imaging movies, are
ubiquitous, containing rich information about the temporal patterns of physical quantities under
certain conditions. Comparison of time series lies at the heart of many scientific questions. Due to
the time distortions, direct comparison of time series using e.g. Euclidean distance is problematic.
Dynamic time warping (DTW) is a powerful and popular technique for time series comparison
using flexible warping functions. DTW has been successful for various tasks, including querying,
classification, and clustering [1, 2, 3]. Although DTW is a mature approach, significant improvements
have been proposed over the years, such as derivative DTW [4], weighted DTW [5], curve pairs with
multiple dimensions [6], and extensions for large scale data mining [7].
However, DTW and all its variants consider the alignment of a single pair of time series, while in
many applications we encounter the task of aligning multiple pairs simultaneously. One might apply
DTW or its variants to each pair separately. However, very often, this is suboptimal because it ignores
the dependency structure between the multiple warping functions. For example, when analyzing time
lapse imaging data [8], we can consider the data as a collection of time series indexed by pixel. One
Figure 1: (a) Each node is a warping path between two curves xn and yn . Neighboring paths are
assumed to be similar (A and B) while non-neighboring ones may be quite different (A and C). (b)
DTW can be represented as a shortest path problem in a directed graph. Each edge originating from
node (k1 , k2 ) has a weight given by the dissimilarity (e.g. Euclidean distance) between xn (k1 ) and
yn (k2 ). The path distance between the purple and green paths is defined as the area of the shaded
parts. (c) Primal and dual graphs. The purple and gold edges are two infinite capacity reverse edges
for the dual and primal graphs, respectively. Only two such edges are drawn for clarity. The dashed
line shows the auxiliary edges used for transforming the primal graph to the dual graph, which are
removed afterwards. (d) Flow chart for GTW. The corresponding figure for each step is annotated.
potential task is to compute the warping function associated with every pixel with respect to a given
reference time series, with the ultimate goal of identifying signal propagation patterns among pixels.
Although different pixels may have different warping functions, we expect that the functions are
more similar between adjacent pixels than between distant pixels. That is, we expect a certain degree
of smoothness among spatially adjacent warping functions. Another example is profile alignment
for liquid chromatography-mass spectrometry (LC-MS) data, which is used to measure expression
levels of small biomolecules such as proteins and metabolites. Each profile can be considered as a
time series indexed by the retention time [9]. Typically, all profiles in the data set must be aligned to
a reference profile. Because the LC-MS data measures similar things against a common reference
profile, we expect similar warping functions for all profiles.
To the best of our knowledge, there is no existing approach that fundamentally generalizes DTW
to jointly model multiple warping functions and align multiple curves, while retaining these advantageous properties of DTW: (1) computational efficiency and (2) a guarantee of global optimality.
As we will discuss below, most existing efforts reuse DTW multiple times in a heuristic way. Interestingly, the necessity for and the challenge of a joint and integrated modeling approach come
precisely from the two factors that contribute to the wide use of DTW. On one hand, the power of
DTW is due to its flexibility in allowing a broad range of warping functions. As is well known in
machine learning, an unavoidable consequence of flexibility is the problem of overfitting [10], and
hence the estimated warping function is often unreliable. This problem becomes severe when the
observed time series are very noisy and this is often the case, rather than the exception, for multiple
curve alignment. On the other hand, the solution to DTW is extremely efficient and global optimality
(with respect to the DTW objective function) is guaranteed, through the application of dynamic
programming [11]. Unfortunately, when we consider joint modeling of multiple warping functions,
dynamic programming is no longer applicable due to interactions between the different warping
functions.
The computational burden of such a joint modeling seems prohibitive, and the feasibility of obtaining
the global optimum is far from obvious, because each warping function is coupled to all the rest,
either directly or indirectly. Thus, we were fortuitous to find that the joint modeling can be solved
very efficiently, with global optimality ensured.
In this paper, we develop Graphical Time Warping (GTW) to jointly model multiple time warping
functions in a unified framework. Given a set of warping function {Pn , n = 1, . . . , N } to be
optimized, a generic form of GTW can be expressed as follows:
$$\min_{\{P_n,\, n = 1, \ldots, N\}} \;\sum_{n=1}^N DTW\_cost(P_n) \;+\; \kappa \sum_{E(m,n) \in G_{struct}} dissimilarity\_cost(P_m, P_n), \tag{1}$$
where Pn is subject to the same constraints as in conventional DTW such as boundary conditions,
continuity, and monotonicity [12]. Gstruct is a graph encoding the dependency structure among
the warping functions. Each node in the graph represents one warping function, indexed by n,
and $E(m, n) \in G_{struct}$ denotes that there is an edge between nodes m and n in $G_{struct}$, whose
corresponding warping functions are expected to be similar, as encoded in the second term of the
cost (1). $DTW\_cost$ is the conventional DTW path-finding cost and $dissimilarity\_cost$ ensures
the neighboring warping functions are similar. The graph Gstruct can be defined by users or induced
from other sources, which provides great flexibility for encoding various types of problems. For
example, to analyze time series imaging data, the graph can be induced by the pixel grid so that
edges exist only between spatially neighboring pixels. Alternatively, when aligning multiple LC-MS
profiles, the graph is fully connected, such that each profile has an edge with all other profiles.
Since a warping function is a path in a two-dimensional grid from a given source to a given sink
(as in Fig.1b), we propose to use the area bounded by two paths as the dissimilarity cost between
them. Later, we will show how the optimization problem in Equation (1) equipped with this specific
dissimilarity cost can be transformed into a network flow problem and solved by the max flow
algorithm [13, 14].
As previously discussed, most DTW improvements have focused on the alignment of a single pair
of curves. There are some heuristic efforts that deal with alignment of multiple curves. Chudova
jointly performed clustering and time warping using a mixture model [15]; this assumes curves from
the same cluster are generated by a single model. This is a suboptimal, restrictive ?surrogate? for
capturing the relationships between curves, and does not capture relationships as (user-)specified
by a graph. Tsai et al. applied an MCMC strategy to align multiple LC-MS profiles with a single
prior distribution imposed on all warping functions [9], but the approach is time-consuming and
no finite-time convergence to the global optimum is guaranteed. Similarly, algorithms for aligning
multiple DNA sequences are based on first clustering the sequences and then progressively aligning
them [16, 17]. Most critically, all existing approaches assume special dependency structures, e.g. all
nodes (curves) are equally dependent, and do not promise a globally optimal solution, while GTW
works with any given dependency structure and finds the globally optimal solution efficiently.
Interestingly, the max flow algorithm has long been suggested as an alternative to DTW [13] by
researchers in the network flow community. As an example, Uchida extended DTW to the nonMarkovian case and solved it by the max flow algorithm [18]. Max flow formulations have also been
developed to solve image segmentation [14], stereo matching [19] and shape matching problems
[20]. But researchers in the time series analysis community have paid little attention to the max
flow approach, perhaps because dynamic programming is much more efficient than the max flow
algorithm and is sufficient for conventional DTW problems.
2 Problem Formulation
The task is to jointly align N pairs of curves $(x_n, y_n)$, $1 \le n \le N$. For the sake of clarity, but
without loss of generality, we assume all curves have the same length K, so that each curve is indexed by
an integer from 1 to K. To rigorously formulate the problem, we have the following definitions.
Definition 1 – valid warping function. A valid warping function for the nth pair of curves is
a set of integer pairs $P_n = \{(k_{n,x}, k_{n,y})\}$ such that the following conditions are satisfied: (a)
boundary conditions: $(1, 1) \in P_n$ and $(K, K) \in P_n$; (b) continuity and monotonicity conditions: if
$(k_{n,x}, k_{n,y}) \in P_n$, then $(k_{n,x} - 1, k_{n,y}) \in P_n$ or $(k_{n,x}, k_{n,y} - 1) \in P_n$ or $(k_{n,x} - 1, k_{n,y} - 1) \in P_n$.
Definition 2 – alignment cost. For any given valid warping function $P_n$ and its corresponding pair
of curves $(x_n, y_n)$, the associated alignment cost is defined as follows:
$$cost(P_n) = \sum_{(k_1, k_2) \in P_n} g(x_n[k_1] - y_n[k_2]), \tag{2}$$
where $g(\cdot)$ is any nonnegative function.
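When the smoothness term is switched off (see Corollary 1 in Section 3.3), each pair reduces to conventional DTW, whose minimal alignment cost under Definitions 1–2 is computed by standard dynamic programming; a sketch (ours), with g(z) = |z| as an illustrative choice:

import numpy as np

def dtw_cost(x, y, g=abs):
    K = len(x)
    D = np.full((K + 1, K + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, K + 1):
        for j in range(1, K + 1):
            # continuity/monotonicity: predecessor is left, down, or diagonal
            D[i, j] = g(x[i - 1] - y[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[K, K]

print(dtw_cost(np.array([0., 1., 2., 1.]), np.array([0., 0., 1., 2.])))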
Figure 2: (a) GTW graph for two neighboring pairs. Only two (bidirectional) edges (green) are drawn
for clarity. The orange background represents the (single pair) primal graphs. The blue foreground
represents the dual graphs. (b) A neighborhood structure used for simulation. In the center is a 10
by 10 grid for 100 pairs, with e.g. a close spatial neighborhood defined around each grid point. The
warping paths for the three blue squares are shown. The short red and green lines indicate when time
shifts occur. They are at different positions along the three paths. The warping paths for spatially
close pairs should be similar.
Definition 3 – neighboring warping functions. Suppose the dependency structure for a set of N
valid warping functions is given by the graph $G_{struct} = \{V_s, E_s\}$, where $V_s$ is the set of nodes,
with each node corresponding to a warping function, and $E_s$ is the set of undirected edges between
nodes. If there is an edge between the mth and nth nodes, we call $P_m$ and $P_n$ neighbors, denoted by
$(m, n) \in Neib$.
Definition 4 – distance between two valid warping functions. We define the distance between two
valid warping functions $dist(P_m, P_n)$ as the area of the region bounded by the two paths as shown in
Fig.1b.
When we jointly align multiple pairs of curves, our goal is to minimize both the overall alignment
cost and the distance between neighboring warping functions. Mathematically, denoting VP the set
of valid warping functions and $\kappa_1$ the hyperparameter, we want to solve the following optimization
problem:
$$\min_P f(P) = \min_{P = \{P_n \in VP \,|\, 1 \le n \le N\}} \sum_{n=1}^N cost(P_n) + \kappa_1 \sum_{(m,n) \in Neib} dist(P_m, P_n) \tag{3}$$

3 Methods
In this section, we first construct a graph based on Equation (3); then we prove that Equation (3) can
be solved via the well-known max flow problem in the graph; finally we provide a practical algorithm.
3.1 Graph construction
Definition 5 – directed planar graph for a single pair of curves. For each pair of curves, consistent
with the cost function (2), there is an induced directed planar graph [21], $G_n := \{V_n, E_n\}$, $1 \le n \le N$,
where $V_n$ and $E_n$ are the nodes and directed edges, respectively. An example is shown in Fig.1b.
Definition 6 – dual graph. Define $G'_n := \{V'_n, E'_n\}$ as the dual graph of the directed planar graph
$G_n$, where the nodes $V'_n$ are all faces of $G_n$, and for each $e \in E_n$, we have a new edge $e' \in E'_n$
connecting the faces from the right side of e to the left side. This edge is directed (with positive
direction by convention). The edge weights are the same as for the primal graph $G_n$. An example is
shown in Fig.1c.
In contrast to conventional dual graph theory, one critical innovation here is that besides the positive
edge we add in one more edge with reverse direction in the dual graph corresponding to each edge in
the primal graph. The weight for the reversed edge is set to infinity. This design is critical: otherwise,
as demonstrated in Fig.3c, we cannot get an equivalent simpler problem.
Definition 7 – GTW graph. The GTW graph $G_{gtw} := \{V_{gtw}, E_{gtw}\}$ is defined as the integrated
graph of all dual graphs $\{G'_n \,|\, 1 \le n \le N\}$ with the integration guided by the neighborhood
of warping functions, such that $V_{gtw} = \{V'_n \,|\, 1 \le n \le N\}$ and $E_{gtw} = \{E'_n \,|\, 1 \le n \le N\} \cup \{(V'_{m,i}, V'_{n,i}) \,|\, (m, n) \in Neib\}$. All newly introduced edges $(V'_{m,i}, V'_{n,i})$ are bi-directional with
capacity $\kappa_2$ (whereas all other edges have capacity proportional to the distance between the two curves,
measured at a pair of time points, i.e. $g(x_n(k_1) - y_n(k_2))$). An example is shown in Fig.2a.
3.2 Equivalent problem

We claim that the GTW problem as stated in Equation (3) is equivalent to the minimum cut problem
on the GTW graph $G_{gtw}$ if we set $\kappa_2 = 2\kappa_1$.
3.3 Proof of equivalence

For brevity, most proofs of the lemmas can be found in the supplementary material.
Definition 8 – labeling of graph. L is a labeling of graph G if it assigns each node in G a binary label.
L induces a cut set $C = \{(i, j) \,|\, L(i) \ne L(j),\, (i, j) \in E_G\}$. The corresponding cut (or flow) is
$cut(L) = cut(C) = \sum_{(i,j) \in C} weight(i, j)$, where $weight(i, j)$ is the weight on the edge between
nodes i and j.
Based on its construction, a labeling L for the graph $G_{gtw}$ can be written as $L = \{L_n \,|\, 1 \le n \le N\}$,
where $L_n$ is a labeling for the dual graph $G'_n$. So we can express the minimum cut problem for the
graph $G_{gtw}$ as:
$$\min_L g(L) = \min_{L := \{L_n | 1 \le n \le N\}} \sum_{n=1}^N cut(L_n) + \kappa_2 \sum_{(m,n) \in Neib} cut(L_m, L_n), \tag{4}$$
where $cut(L_n)$ is the cut of all edges for $G'_n$ and $cut(L_m, L_n)$ is the number of cut edges between
two neighboring dual graphs $G'_m$ and $G'_n$.
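Once the GTW graph is assembled, problem (4) is a standard s–t minimum cut and any off-the-shelf max flow solver applies. A sketch (ours) using networkx on a toy edge list; the real nodes and capacities would come from Definition 7, which we do not rebuild here, so the edges below are placeholders standing in for the alignment and neighborhood terms.

import networkx as nx

G = nx.DiGraph()
edges = [("s", "a", 3.0), ("s", "b", 2.0), ("a", "b", 1.0),
         ("a", "t", 2.0), ("b", "t", 3.0)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
labeling = {v: 0 if v in S else 1 for v in G}  # L(v) as in Definition 8
print(cut_value, labeling)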
Denote $L_{mf}$ as the labeling induced by applying the max flow algorithm on $G_{gtw}$, where for
each node v, $L_{mf}(v) = 0$ if $dist_{res}(v, s) < \infty$ and $L_{mf}(v) = 1$ if $dist_{res}(v, s) = \infty$, where
$dist_{res}(i, j)$ is the distance between nodes i and j on the residual graph $G_{ext,res}$ given by the
maximum flow algorithm and s and t are the source and sink nodes of $G_{gtw}$, respectively. Denote
$S = \{v \,|\, L_{mf}(v) = 0\}$ and $T = \{v \,|\, L_{mf}(v) = 1\}$. We further denote $L_{mf,n}$ as the component
corresponding to $G'_n$. Similarly, $S_n$ and $T_n$ are subsets of S and T, respectively. Obviously, by
the max-flow min-cut theorem, the resulting cut set $C_{mf}$ has the smallest cut. $C_{mf,n}$ is the cut set
restricted to the graph $G'_n$.
Lemma 1 Given labeling $L_{mf,n} \subseteq L_{mf}$, $S_n$ forms a single connected area within graph $G'_n$. That
is, $\forall i \in S_n$, there is a path with nodes $\{i, j, k, \ldots, s\} \subseteq S_n$ from i to s. Similarly, $T_n$ also forms a
single connected area. In other words, after applying the max flow algorithm on $G_{gtw}$, members of
group $S_n$ do not completely surround members of group $T_n$, or vice versa.
Definition 9 – directed cut set. Cut set C is a directed cut set if $\forall (i, j) \in C$, either $i \in S$ and $j \in T$,
or $cap(i, j) = \infty$, $i \in T$ and $j \in S$. As will be seen, this definition ensures that the cut set includes
only the edges $e'$ that correspond to edges in the primal graph $G_n$, instead of the reverse edges
introduced when building the dual graph $G'_n$, which give the wrong path direction.
Lemma 2 $C_{mf,n}$ is a directed cut set.
From Lemmas 1 and 2, we can build the link between the first term of f (Equation (3)) and g (Equation
(4)).
Lemma 3 For any directed cut set C_n, 1 ≤ n ≤ N for G_gtw, there is a valid warping function
P_n, 1 ≤ n ≤ N for G_n, 1 ≤ n ≤ N so that cut(C_n) = cost(P_n), and vice versa.
Lemma 4 For two neighboring pairs (x_m, y_m) and (x_n, y_n), dist(P_n, P_m) = 0.5|C_{m,n}|, where we
denote C_{m,n} := {(V'_{m,i}, V'_{n,i}) | V'_{m,i} ∈ S, V'_{n,i} ∈ T or V'_{m,i} ∈ T, V'_{n,i} ∈ S}.
Lemma 4 states that the distance between two paths in the primal graph (Fig.1b) is the same as the
number of neighborhood cuts between those two pairs, up to a constant scaling factor.
Lemma 5 Let P be a set of valid warping functions for {G_n | 1 ≤ n ≤ N} and let L be the labeling
in G_gtw that corresponds to directed cuts. If κ2 = 2κ1, given P, we can find a corresponding L with
f(P) = g(L), and given L, we can find a corresponding P so that g(L) = f(P).
Proof. First we show each P gives an L. As each path P_n can be transformed to a directed cut C_n
(Lemma 3), which by definition is also a cut, it gives a valid labeling L_n with cost(P_n) = cut(L_n)
by definition. And dist(P_m, P_n) = 0.5 · cut(L_m, L_n) by Lemma 4. Then, with κ2 = 2κ1, we
find L = {L_n | 1 ≤ n ≤ N} such that f(P) = g(L). Conversely, given L = {L_n | 1 ≤ n ≤ N}
corresponding to directed cuts C_n, each C_n can be transformed back to a valid path P_n with the same cost
(Lemma 3). For the cut between L_m and L_n, we still have cut(L_m, L_n) = 2 · dist(P_m, P_n) by
Lemma 4. Thus we find a set P = {P_n | 1 ≤ n ≤ N} with the same cost as L.
Theorem 1 If L_mf is a labeling induced by the maximum flow algorithm on G_gtw, then the corresponding P minimizes f(P).
Proof. Assume the max flow algorithm gives us a labeling L, which corresponds to path P; by
Lemma 5 the relationship f(P) = g(L) holds. Here f is the primal cost function and g is the dual
cost function. Assume we have another labeling L′ ≠ L that corresponds to another path P′; then,
also by Lemma 5, f(P′) = g(L′) holds. Suppose path P′ is better than path P, i.e. f(P′) < f(P).
This implies g(L′) < g(L), which contradicts the fact that L is the labeling produced by the max
flow algorithm. Thus, there is no better path in terms of f(·) than that associated with the result of the
max flow algorithm.
From Theorem 1 we know that once the max flow algorithm finishes and the labeling is obtained, we get a
single path P_n for each pair (x_n, y_n), which solves the primal form optimization problem. Since the
labeling may not be unique, different labelings may have the same cut. Correspondingly,
different paths in the primal graph may have the same (jointly minimum) cost.
Corollary 1 If κ1 = κ2 = 0, the L that minimizes g(L) corresponds to the P = {P_n | 1 ≤ n ≤ N}
where P_n is the solution of the single-pair DTW problem for (x_n, y_n).
3.4 Flowchart of GTW algorithm
Once the equivalence is established, a practical algorithm is readily available, as shown in the
flowchart of Fig.1d. Assuming the hyperparameter κ1 is fixed, one first constructs a primal graph
separately for each alignment task, then converts each primal graph to its dual form, and finally adds
in edges to the set of dual graphs to obtain the GTW graph. Once we get the GTW graph, we can
apply any maximum flow algorithm to the graph, leading to the minimum cut set Cmf . For each sub
cut-set Cmf,n corresponding to the nth dual graph G0n , we convert the cut edges back to edges in
the primal graph Gn . The resulting edges will be connected as a warping path and hence lead to a
warping function. The set of resultant warping functions are the solution to our GTW problem. A
working example is given in the Supplementary.
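To make the solution step concrete, here is a hedged toy illustration of the max-flow/min-cut machinery using networkx. The GTW graph construction itself is omitted; the tiny graph below merely mimics two dual graphs coupled by neighborhood edges of capacity κ2 = 2κ1, and all node names and capacities are illustrative.

```python
# Toy stand-in for the solution step: once the GTW graph is built
# (construction omitted), any max-flow routine yields the minimum cut,
# and the source side of the cut gives the labeling L_mf.
import networkx as nx

kappa1 = 0.5
G = nx.DiGraph()
G.add_edge("s", "a", capacity=1.0)
G.add_edge("a", "t", capacity=3.0)
G.add_edge("s", "b", capacity=3.0)
G.add_edge("b", "t", capacity=1.0)
# coupling edges between "corresponding" nodes, capacity kappa2 = 2 * kappa1
G.add_edge("a", "b", capacity=2 * kappa1)
G.add_edge("b", "a", capacity=2 * kappa1)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
labels = {v: (0 if v in S else 1) for v in G}  # the labeling L_mf from the text
print(cut_value, labels)
```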
Note also that, as indicated in Fig.1d, this algorithm can be iteratively applied, with parameter (and
hyperparameter) re-estimation performed at each iteration. The primary parameter is the noise
variance (which can easily be generalized to a separate noise variance parameter for each pair of
curves, when appropriate). In addition to the major hyperparameter κ1 in Equation (3), we may use
other hyperparameters to incorporate prior knowledge, such as favoring a diagonal warping direction,
which actually results in an extension of DTW even for a single pair of curves. In the Supplement,
we show that the hyperparameters can be tuned, along with parameters, via either cross validation
or approximately consistent with maximum likelihood estimation. In addition, as a heuristic rule of
thumb, we can choose κ1 = aσ², where σ² is the noise variance and a ∈ (1, 10).
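A trivial helper for this rule of thumb follows; the default a = 3 is an arbitrary illustrative choice, not a value recommended by the paper.

```python
# Rule-of-thumb choice of kappa1 from an estimated noise variance.
def kappa1_rule_of_thumb(noise_variance, a=3.0):
    assert 1.0 < a < 10.0, "a should lie in (1, 10) per the heuristic"
    return a * noise_variance

print(kappa1_rule_of_thumb(0.01))  # 0.03
```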
4 Experimental results
We used synthetic and real data to compare the performance of GTW and DTW. For the synthetic
data, we evaluate the performance by the estimation error for the warping path Pn . For real data, we
examine the spatial delay pattern relative to a reference curve. We also illustrate the impact of the
capacity of the reverse edges. More experiments can be found in the Supplement.
Figure 3: (a) The curves before (blue, x_n) and after (red, y_n) warping in the simulation. The green
dashed squares indicate where the warping occurs. (b) Performance comparison of GTW and DTW
for 100 simulations under different additive noise variances. Both cases include the off-diagonal
weights ω (see section 4.1). Error bars indicate standard deviation. (c) The impact of reverse capacity.
Left: a pair of curves from an astrocyte imaging movie. Only times 81 to 100 are shown. The right
three figures are the warping paths with different reverse capacities. Pos_cap is the capacity for the
corresponding edges from the primal graph. Red dashed circles indicate where the DTW constraints
are violated. (d) Estimated propagation patterns on the astrocyte image. Left: original movie from
times 6 to 8. The yellow dot is the position of the reference curve. Middle: the delay pattern of pixels
relative to the reference curve, estimated by GTW. Right: results for DTW.
4.1 Experiment on synthetic data
We generated N = 100 pairs of curves (xn , yn ). Each pair is linked by a warping function Wn so
that yn = Wn (xn ). Curve xn is a time series composed of low pass filtered Gaussian noise and yn is
generated by applying Wn on xn (Fig.3a). Noise is also added to both xn and yn . In this simulation
the pairs are in a 10 × 10 four-connected grid; thus the ground-truth warping paths for neighboring
pairs are similar (Fig.2b). The warping path of the pair at location (1, 1) has a one time-point shift
from 21 to 30 and another one from 71 to 80. The pair at location (10, 10) has a one time point shift
from 30 to 39 and another from 62 to 71. The warping function for pairs between these locations are
smoothly interpolated.
We ran the simulation 100 times and added uncorrelated Gaussian noise to x_n and y_n. All hyperparameters were initialized to 0; the noise variance was initialized to 0.01. In addition, the distance
of the path from the diagonal line was penalized via a hyperparameter ω = d/√2, where d is the
distance of a point in the path to the diagonal. When the parameter and hyperparameter changes were
all less than 0.001, we stopped the algorithm. Convergence usually occurred within 10 iterations. The
estimated path was compared with the ground-truth one, and we define the normalized error as
err_norm = (1 / ((K − 1)N)) Σ_{n=1}^{N} Σ_{k=1}^{K−1} | S_{P̂_n}(k, k + 1) − S_{P_n}(k, k + 1) |    (5)

Here S_{P̂_n}(k, k + 1) is the area under the path P̂_n between times k and k + 1.
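For concreteness, a minimal sketch of Eq. (5) follows, assuming each warping path has already been summarized by its area-under-path values S_P(k, k+1) stored as an (N, K−1) array; that array layout is an assumption for illustration, not the paper's implementation.

```python
# Normalized error of Eq. (5): mean absolute difference of area-under-path
# values between estimated and ground-truth warping paths.
import numpy as np

def normalized_error(areas_est, areas_true):
    """areas_est, areas_true: arrays of shape (N, K-1)."""
    areas_est = np.asarray(areas_est)
    areas_true = np.asarray(areas_true)
    n, km1 = areas_true.shape
    return np.abs(areas_est - areas_true).sum() / (km1 * n)
```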
GTW improves the accuracy in estimating warping functions. As shown in Fig.3b, GTW outperforms DTW even when the noise level is small or moderate. Moreover, while DTW degrades with
increasing noise, GTW maintains a much smaller change in its normalized error for increasing noise.
Infinite capacity reverse edges are critical. In Fig.3c we illustrate the importance of introducing
infinite capacity reverse edges when we construct the dual graph G0n for each primal graph Gn . This
ensures the cut found by the maximum flow algorithm is a directed cut, which is linked to a path
in the primal graph that satisfies the constraints of DTW. If the reverse edge is not added, the max
flow algorithm acts as if there is a reverse edge with zero weight. Alternatively, we can add in a
reverse edge with the same weight as for the positive direction. However, in both cases as shown in
the right two subplots of Fig.3c, DTW's monotonicity and continuity constraints are violated almost
everywhere, since what we obtain by max flow in this case is no longer a directed cut and the path in
the primal graph is no longer a valid warping function.
4.2 Application to time-lapse astrocyte calcium imaging data
We applied GTW to estimate the propagation patterns of astrocyte calcium fluorescent imaging data
[22, 8]. The movie was obtained from a neuro-astrocyte co-cultured Down syndrome cell line. It
contains 100 time points and rich types of propagation are observed during the time course. Here we
focused on a selected region. The movie between time instants 6 and 8 is shown in the left column of
Fig.3d. At time 6, the activity occurs at the center part and it spreads out over the subsequent time
points. At time 8, the active area is the largest. Since the movie was taken while the cells were under
drug treatment conditions, the properties of these calcium waves are important features of interest.
Here we focused on one segmented area and identified the propagation pattern within it. We extracted
the curve for one pixel as the reference curve x (Fig.3c, left), and all other pixels are y_n. So now
x_1 = x_2 = · · · = x_N = x, which is a special case of GTW. All parameters and hyperparameters
were initialized in the same way as previously, and both methods included an off-diagonal cost ω.
From the estimated warping path, we extracted the delay relative to the reference curve, which is
defined as the largest discrepancy from the diagonal line at a given time point (Fig.3d, middle and
right columns). GTW gives cleaner patterns of delay compared to DTW, which produces noisier
results.
5 Discussion
While GTW can be applied to time series data analysis tasks like classification and clustering to
obtain a smoothed distance measure, it could be even more powerful for mining the relationships
between warping functions. Their differences could be classified or clustered, and explained by other
features (or factors) for those curve pairs. This may bring further insights and interpretability to the
solution. As a two-layer network for time series, GTW is a general framework for analyzing the
pattern of warping functions. First, the time series can be flexibly organized into pairs with DTW
constraints. One curve can participate in multiple pairings and even play different roles (either as
a reference or as a test curve). Partial matching, direction preference and weighting of DTW can
be readily incorporated. In addition, GTW allows the test curve and the reference curve to have
different lengths. Second, the construction of graphs from pairs adds another layer of flexibility.
For spatio-temporal data or video analysis, physical locations or pixels naturally guide the choice
of graph edges. Otherwise, we can avoid using a fully connected graph by utilizing any auxiliary
information on each pair of curves to build the graph. For example, features related to each subject
(e.g., clinical features) can be used to enforce a sparse graph structure.
6 Conclusion
In this paper, we developed graphical time warping (GTW) to impose a flexible dependency structure
among warping functions to jointly align multiple pairs of curves. After formulating the original
cost function, the single pair time warping term is transformed into its dual form and pairwise costs
are added. We proved the equivalence of this dual form and the primal form by the properties of
the dual-directed graph as well as the specific structure of the primal single pair shortest path graph.
Windowing, partial matching, direction, and off-diagonal costs can also be incorporated in the model,
which makes GTW flexible for various applications of time warping. Iterative unsupervised parameter
estimation and inference by max flow are shown to be effective and efficient in our experiments.
Simulation results and a case study of astrocyte propagation demonstrate the effectiveness of our
approach.
References
[1] D. J. Berndt and J. Clifford, "Using Dynamic Time Warping to Find Patterns in Time Series," in Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, AAAIWS'94, (Seattle, WA), pp. 359-370, AAAI Press, 1994.
[2] T.-c. Fu, "A review on time series data mining," Engineering Applications of Artificial Intelligence, vol. 24, pp. 164-181, Feb. 2011.
[3] T. Warren Liao, "Clustering of time series data - a survey," Pattern Recognition, vol. 38, pp. 1857-1874, Nov. 2005.
[4] E. Keogh and M. Pazzani, "Derivative Dynamic Time Warping," in Proceedings of the 2001 SIAM International Conference on Data Mining, pp. 1-11, Society for Industrial and Applied Mathematics, Apr. 2001.
[5] Y.-S. Jeong, M. K. Jeong, and O. A. Omitaomu, "Weighted dynamic time warping for time series classification," Pattern Recognition, vol. 44, pp. 2231-2240, Sept. 2011.
[6] M. Shokoohi-Yekta, J. Wang, and E. Keogh, "On the Non-Trivial Generalization of Dynamic Time Warping to the Multi-Dimensional Case," in Proceedings of the 2015 SIAM International Conference on Data Mining, pp. 289-297, Society for Industrial and Applied Mathematics, June 2015.
[7] T. Rakthanmanon, B. Campana, A. Mueen, G. Batista, B. Westover, Q. Zhu, J. Zakaria, and E. Keogh, "Searching and mining trillions of time series subsequences under dynamic time warping," in Proceedings of the 18th ACM SIGKDD, pp. 262-270, ACM, 2012.
[8] A. Volterra, N. Liaudet, and I. Savtchouk, "Astrocyte Ca2+ signalling: an unexpected complexity," Nat Rev Neurosci, vol. 15, pp. 327-335, May 2014.
[9] T.-H. Tsai, M. G. Tadesse, C. Di Poto, L. K. Pannell, Y. Mechref, Y. Wang, and H. W. Ressom, "Multi-profile Bayesian alignment model for LC-MS data analysis with integration of internal standards," Bioinformatics, vol. 29, pp. 2774-2780, Nov. 2013.
[10] J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning, vol. 1. Springer Series in Statistics, Springer, Berlin, 2001.
[11] P. F. Felzenszwalb and R. Zabih, "Dynamic Programming and Graph Algorithms in Computer Vision," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, pp. 721-740, Apr. 2011.
[12] E. Keogh and C. A. Ratanamahatana, "Exact indexing of dynamic time warping," Knowl Inf Syst, vol. 7, pp. 358-386, May 2004.
[13] R. Ahuja, T. Magnanti, and J. Orlin, Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Feb. 1993.
[14] Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 1124-1137, Sept. 2004.
[15] D. Chudova, S. Gaffney, and P. Smyth, "Probabilistic models for joint clustering and time-warping of multidimensional curves," in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, pp. 134-141, Morgan Kaufmann Publishers Inc., 2002.
[16] P. Hogeweg and B. Hesper, "The alignment of sets of sequences and the construction of phyletic trees: an integrated method," Journal of Molecular Evolution, vol. 20, no. 2, pp. 175-186, 1984.
[17] F. Sievers, A. Wilm, D. Dineen, T. J. Gibson, K. Karplus, W. Li, R. Lopez, H. McWilliam, M. Remmert, J. Söding, and others, "Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega," Molecular Systems Biology, vol. 7, no. 1, p. 539, 2011.
[18] S. Uchida, M. Fukutomi, K. Ogawara, and Y. Feng, "Non-Markovian dynamic time warping," in 2012 21st International Conference on Pattern Recognition (ICPR), pp. 2294-2297, Nov. 2012.
[19] H. Ishikawa and D. Geiger, "Occlusions, discontinuities, and epipolar lines in stereo," in Computer Vision - ECCV'98, Lecture Notes in Computer Science, pp. 232-248, Springer Berlin Heidelberg, June 1998. DOI: 10.1007/BFb0055670.
[20] F. R. Schmidt, E. Töppe, D. Cremers, and Y. Boykov, "Efficient Shape Matching Via Graph Cuts," in Energy Minimization Methods in Computer Vision and Pattern Recognition, no. 4679 in Lecture Notes in Computer Science, pp. 39-54, Springer Berlin Heidelberg, Aug. 2007.
[21] B. Korte and J. Vygen, Combinatorial Optimization, vol. 2. Springer, 2012.
[22] Y. Wang, G. Shi, D. J. Miller, Y. Wang, G. Broussard, Y. Wang, L. Tian, and G. Yu, "FASP: A machine learning approach to functional astrocyte phenotyping from time-lapse calcium imaging data," in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 351-354, Apr. 2016.
History-dependent Attractor Neural
Networks
Isaac Meilijson
Eytan Ruppin
School of Mathematical Sciences
Raymond and Beverly Sackler Faculty of Exact Sciences
Tel-Aviv University, 69978 Tel-Aviv, Israel.
Abstract
We present a methodological framework enabling a detailed description of the performance of Hopfield-like attractor neural networks (ANN) in the first two iterations. Using the Bayesian approach, we find that performance is improved when a history-based
term is included in the neuron's dynamics. A further enhancement
of the network's performance is achieved by judiciously choosing
the censored neurons (those which become active in a given iteration) on the basis of the magnitude of their post-synaptic potentials. The contribution of biologically plausible, censored, history-dependent dynamics is especially marked in conditions of low firing
activity and sparse connectivity, two important characteristics of
the mammalian cortex. In such networks, the performance attained is higher than the performance of two 'independent' iterations, which represents an upper bound on the performance of
history-independent networks.
1 Introduction
Associative Attractor Neural Network (ANN) models provide a theoretical background for the understanding of human memory processes. Considerable effort has
been devoted recently to narrow the gap between the original ANN Hopfield model
(Hopfield 1982) and the realm of the structure and dynamics of the brain (e.g.,
Amit & Tsodyks 1991). In this paper, we contribute to the examination of the
performance of ANNs under cortical-like architectures, where neurons are typically
connected to only a fraction of their neighboring neurons, and have a low firing
activity (Abeles et al. 1990). We develop a general framework for examining various signalling mechanisms (firing functions) and activation rules (the mechanism
for deciding which neurons are active in some interval of time).
The Hopfield model is based on memoryless dynamics, which identify the notion of
'post-synaptic potential' with the input field received by a neuron from the neurons
active in the current iteration. We follow a Bayesian approach under which the
neuron's signalling and activation decisions are based on the current a-posteriori
probabilities assigned to its two possible true memory states, ±1. As we shall
see, the a-posteriori belief in +1 is the sigmoidal function evaluated at a neuron's
generalized field, a linear combination of present and past input fields. From a
biological perspective, this history-dependent approach is strongly motivated by
the observation that the time span of the different channel conductances in a given
neuron is very broad (see Lytton 1991 for a review). While some channels are active
for only microseconds, some slow-acting channels may remain open for seconds.
Hence, a synaptic input currently impending on the neuron may influence both its
current post-synaptic membrane potential, and its post-synaptic potential at some
future time.
2 The Model
The neural network model presented is characterized as follows. There are m 'random memories' ξ^μ, 1 ≤ μ ≤ m, and one 'true' memory ξ^{m+1} = ξ. The (m + 1)N
entries of these memories are independent and identically distributed, with equally
likely values of +1 or −1. The initial state X has similarity P(X_i = ξ_i) = (1 + ε)/2,
P(X_i = −ξ_i) = (1 − ε)/2, independently of everything else. The weight of the
synaptic connection between neurons i and j (i ≠ j) is given by the simple Hebbian
law

W_ij* = Σ_{μ=1}^{m+1} ξ_i^μ ξ_j^μ.    (1)
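A small numeric companion to Eq. (1) is given below (an editorial sketch, not code from the paper): memories are stored as rows of a ±1 matrix and the weights are an outer-product sum. The zeroed diagonal is only a placeholder, since the paper treats the self-weight W_ii separately.

```python
# Hebbian weights of Eq. (1): W[i, j] = sum over all m+1 memories of
# xi[mu, i] * xi[mu, j], for +/-1-valued memory patterns.
import numpy as np

rng = np.random.default_rng(0)
m, N = 5, 100
xi = rng.choice([-1, 1], size=(m + 1, N))  # m random memories + one 'true' memory
W = xi.T @ xi                               # W_ij* = sum_mu xi_i^mu xi_j^mu
np.fill_diagonal(W, 0)                      # self-connections handled separately
```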
Each neuron receives incoming synaptic connections from a random choice of K of
the N neurons in the network in such a way that if a synapse exists, the synapse
in the opposite direction exists with probability r, the reflexivity parameter. In the
first iteration, a random sample of L1 neurons become active (i.e., 'fire'), thus on
the average n1 = L1 K/N neurons update the state of each neuron. The field f_i^{(1)}
of neuron i in the first iteration is

f_i^{(1)} = (1/n1) Σ_{j=1}^{N} W_ij* I_ij I_j^{(1)} X_j,    (2)

where I_ij denotes the indicator function of the event 'neuron i receives a synaptic
connection from neuron j', and I_j^{(t)} denotes the indicator function of the event
'neuron j is active in the t'th iteration'. Under the Bayesian approach we adopt,
neuron i assigns an a-priori probability λ_i^{(0)} = P(ξ_i = +1 | X_i) = (1 + εX_i)/2 to
having +1 as the correct memory state and evaluates the corresponding a-posteriori
probability λ_i^{(1)} = P(ξ_i = +1 | X_i, f_i^{(1)}), which turns out to be expressible as the
sigmoidal function 1/(1 + exp(−2x)) evaluated at some linear combination of X_i
and f_i^{(1)}.
and fi(I).
In the second iteration the belief A/I) of a neuron determines the probability that
the neuron is active. We illustrate two extreme modes for determining the active
updating neurons, or activation: the random case where L2 active neurons a.re
randomly chosen, independently of the strength of their fields, and the censored
case, which consists of selecting the L2 neurons whose belief belongs to some set.
The most appealing censoring rule from the biological point of view is tail-censoring,
where the active neurons are those with the strongest beliefs. Performance, however,
is improved under interval-censoring, where the active neurons are those with midrange beliefs, and even further by combining tail and interval censoring into a hybrid
rule.
Let n2 = L2 f{ / N . The activation rule is given by a function C : [~, 1] -+ [0, 1] .
Neuron j, with belief A/I) in +1, becomes active with probability C(maxp/I), 1Aj (1?)), independently of everything else. For example, the random case corresponds
to C
and the tail-censored case corresponds to C(A) = 1 or 0 depending on
whether max(A, 1 - A) exceeds some threshold. The output of an active neuron j
is a signal function S(A/ I ?) of its current belief. The field f/ 2 ) of neuron i in the
second iteration is
=In
N
f ,?(2) -- ~ ""
L...J w.I}.. * /-I }?I}?(2)S(A } ?(1?)
n2
?
(3)
j=1
Neuron i now evaluates its a-posteriori belief
λ_i^{(2)} = P(ξ_i = +1 | X_i, I_i^{(1)}, f_i^{(1)}, f_i^{(2)}). As we shall see, λ_i^{(2)} is, again, the
sigmoidal function evaluated at some linear combination of the neuron's history
X_i, X_i I_i^{(1)}, f_i^{(1)} and f_i^{(2)}. In contrast to the common history-independent Hopfield
dynamics, where the signal emitted by neuron j in the t'th iteration is a function
of f_j^{(t−1)} only, Bayesian history-dependent dynamics involve signals and activation
rules which depend on the neuron's generalized field, obtained by adaptively incorporating f_j^{(t−1)} into its previous generalized field. The final state X_i^{(2)} of neuron i
is taken as −1 or +1, depending on which of 1 − λ_i^{(2)} and λ_i^{(2)} exceeds 1/2.

For n1/N, n2/N, m/N, K/N constant, and N large, we develop explicit expressions
for the performance of the network, for any signal function (e.g., S1(λ) = Sgn(λ − 1/2) or S2(λ) = 2λ − 1) and activation rule. Performance is measured by the final
overlap ε″ = (1/N) Σ_i ξ_i X_i^{(2)} (or equivalently by the final similarity (1 + ε″)/2). Various
possible combinations of activation modes and signal functions described above are
then examined under varying degrees of connectivity and neuronal activity.
3 Single-iteration optimization: the Bayesian approach
Consider the following well known basic fact in Bayesian Hypothesis Testing.

Lemma 1
Express the prior probability as

P(ξ = 1) = 1/(1 + e^{−2x})    (4)

and assume an observable Y which, given ξ, is distributed according to

Y | ξ ~ N(μξ, σ²)    (5)

for some constants μ ∈ (−∞, ∞) and σ² ∈ (0, ∞). Then the posterior probability is

P(ξ = 1 | Y = y) = 1/(1 + e^{−2(x + μy/σ²)}).    (6)

Applying this Lemma to Y = f_i^{(1)}, with μ = ε and σ² = m/n1 = α1, we see that

λ_i^{(1)} = P(ξ_i = 1 | X_i, f_i^{(1)}) = 1/(1 + e^{−2ε(γ(ε)X_i + f_i^{(1)}/α1)}),    (7)

where γ(ε) = (1/(2ε)) log((1 + ε)/(1 − ε)). Hence, P(ξ_i = 1 | X_i, f_i^{(1)}) > 1/2 if and only if
f_i^{(1)} + α1 γ(ε) X_i > 0. The single-iteration performance is then given by the similarity

Q(ε, α1) = ((1 + ε)/2) Φ(ε/√α1 + γ(ε)√α1) + ((1 − ε)/2) Φ(ε/√α1 − γ(ε)√α1),    (8)

where Φ is the standard normal distribution function. The Hopfield dynamics, modified by redefining W_ii as mγ(ε) (in the Neural Network terminology), is equivalent
(in the Bayesian jargon) to the obvious optimal policy, under which a neuron sets
for itself the sign with posterior probability above 1/2 of being correct.
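A hedged implementation of Q(ε, α) from Eq. (8) is given below, using scipy's standard normal CDF for Φ; the printed example values are illustrative only.

```python
# Single-iteration performance Q(eps, alpha) of Eq. (8).
import numpy as np
from scipy.stats import norm

def Q(eps, alpha):
    g = np.log((1 + eps) / (1 - eps)) / (2 * eps)  # gamma(eps)
    plus = norm.cdf(eps / np.sqrt(alpha) + g * np.sqrt(alpha))
    minus = norm.cdf(eps / np.sqrt(alpha) - g * np.sqrt(alpha))
    return (1 + eps) / 2 * plus + (1 - eps) / 2 * minus

print(Q(0.5, 0.1))  # similarity after one optimal iteration (toy values)
```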
4 Two-iterations optimization
For mathematical convenience, we will relate signals and activation rules to normalized generalized fields rather than to beliefs. We let

h(x) = S(1/(1 + e^{−2cx})),   p(x) = C(max(1/(1 + e^{−2cx}), 1 − 1/(1 + e^{−2cx})))    (9)

for c = ε/√α1. The signal function h is assumed to be odd, and the activation
function p, even.
In order to evaluate the belief λ_i^{(2)}, we need the conditional distribution of f_i^{(2)}
given X_i, I_i^{(1)} and f_i^{(1)}, for ξ_i = −1 or ξ_i = +1. We adopt the working assumption
that the pair of random variables (f_i^{(1)}, f_i^{(2)}) has a bivariate normal distribution
given ξ_i, I_i^{(1)} and X_i, with ξ_i, I_i^{(1)} and X_i affecting means but not variances or
correlations. Under this working assumption, f_i^{(2)} is conditionally normal given
(ξ_i, I_i^{(1)}, X_i, f_i^{(1)}), with constant variance and a mean which we will identify. This
working assumption allows us to model performance via the following well known
regression model.
Lemma 2
If two random variables U and V with finite variances are such that E(V|U) is a
linear function of U and Var(V|U) is constant, then

E(V|U) = E(V) + (Cov(U, V)/Var(U)) (U − E(U))    (10)

and

Var(V|U) = Var(V) − Cov²(U, V)/Var(U).    (11)
Letting U = f_i^{(1)} and V = f_i^{(2)}, we obtain

λ_i^{(2)} = P(ξ_i = 1 | X_i, I_i^{(1)}, f_i^{(1)}, f_i^{(2)})
       = 1/(1 + exp{−2 [ε(f_i^{(1)}/α1 + γ(ε)X_i) + ((ε* − aε)/τ²)(f_i^{(2)} − bX_i I_i^{(1)} − a f_i^{(1)})]})
       = 1/(1 + exp{−2 [(εγ(ε) − (b(ε* − aε)/τ²) I_i^{(1)}) X_i + (ε/α1 − a(ε* − aε)/τ²) f_i^{(1)} + ((ε* − aε)/τ²) f_i^{(2)}]}),    (12)

which is the sigmoidal function evaluated at some generalized field. Expression (12)
shows that the correct definition of a final state X_i^{(2)}, as the most likely value among
+1 or −1, is

X_i^{(2)} = Sgn[(εγ(ε) − (b(ε* − aε)/τ²) I_i^{(1)}) X_i + (ε/α1 − a(ε* − aε)/τ²) f_i^{(1)} + ((ε* − aε)/τ²) f_i^{(2)}]    (13)

(here a, b and τ² are the regression slope, the I_i^{(1)}-dependent mean shift and the residual variance from Lemma 2, and ε* is the corresponding mean parameter; their identification is omitted, as noted below).
and the performance is given by

P(X_i^{(2)} = ξ_i | ξ_i) = ((1 + ε)/2) Φ(ε/√α″ + γ(ε)√α″) + ((1 − ε)/2) Φ(ε/√α″ − γ(ε)√α″) = Q(ε, α″),    (14)

where the one-iteration performance function Q is defined by (8), and

α″ = m/n″.    (15)
We see that the performance is conveniently expressed as the single-iteration optimal
performance, had this iteration involved n″ rather than n1 sampled neurons. This
formula yields a numerical and analytical tool to assess the network's performance
with different signal functions, activation rules and architectures. Due to space
restrictions, the identification of the various parameters used in the above formulas
is not presented. However, it can be shown that in the sparse limit arrived at
by fixing α1 and α2 and letting both K/m and N/K go to infinity, it is always
better to replace an iteration by two smaller ones. This suggests that Bayesian
updating dynamics should be essentially asynchronous.
We also show that the two-iterations performance Q(ε, (1/α1 + 1/α2)^{−1}) is superior to the performance Q(2Q(ε, α1) − 1, α2) of two independent optimal single iterations.
5 Heuristics on activation and signalling

Figure 1: A typical plot of R(x) = φ1(x)/φ0(x). Network parameters are N = K = 500, n1 = n2 = 50 and m = 10.
By (14) and (15), performance is mostly determined by the magnitude of (ε* − aε)².
It can be shown that

ε* − aε = ∫ h(x) p(x) φ1(x) dx    (16)

and

w̄_a = ∫ p(x) φ0(x) dx,    (17)

where φ1 and φ0 are some specific linear combinations of Gaussian densities and
their derivatives, and w̄_a = n2/K is the activity level. High performance is achieved
by maximizing over p and possibly over h the absolute value of expression (16),
keeping (17) fixed. In complete analogy to Hypothesis Testing in Statistics, where
w̄_a takes the role of level of significance and (ε* − aε)w̄_a the role of power, p(x)
should be 1 or 0 (activate the neuron or don't) depending on whether the field
value x is such that the likelihood ratio h(x)φ1(x)/φ0(x) is above or below a given
threshold, determined by (17). Omitting details, the ratio R(x) = φ1(x)/φ0(x)
looks as in figure 1, and converges to −∞ as x → ∞.
We see that there are three reasonable ways to make the ratio h(x)φ1(x)/φ0(x)
large: we can take a negative threshold such as t1 in figure 1, activate all neurons
with generalized field exceeding β3 (tail-censoring) and signal h(x) = −Sgn(x),
or take a positive threshold such as t2 and activate all neurons with field value
between β1 and β2 (interval-censoring) and signal h(x) = Sgn(x). Better still, we
can consider the hybrid signalling-censoring rule: activate all neurons with absolute
field value between β1 and β2, or beyond β3. The first group should signal their
preferred sign, while those in the second group should signal the sign opposite to
the one they so strongly believe in!
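For concreteness, the three activation rules can be sketched as predicates on the normalized generalized field x; the thresholds β1 < β2 < β3 below are placeholders, not values from the paper.

```python
# Tail, interval, and hybrid censoring as activation predicates on |x|.
import numpy as np

def tail_censor(x, beta3):
    return np.abs(x) > beta3                            # strongest beliefs

def interval_censor(x, beta1, beta2):
    return (np.abs(x) > beta1) & (np.abs(x) < beta2)    # mid-range beliefs

def hybrid_censor(x, beta1, beta2, beta3):
    return interval_censor(x, beta1, beta2) | tail_censor(x, beta3)
```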
6 Numerical results
Table 1: Sparsely connected, low-activity network: N = 1500, K = 50, n1 = n2 = 20, m = 5.

Performance                 | predicted | experimental
Random activation           | 0.955     | 0.951
Tail censoring              | 0.972     | 0.973
Interval/Hybrid censoring   | 0.975     | 0.972
Hopfield - zero diagonal    | -         | 0.902, 0.973
Independent γ(ε) diagonal   | 0.96      | -
Independent zero diagonal   | 0.913     | -
Figure 2: Performance of a large-scale cortical-like 'columnar' ANN, at different values of connectivity K, for initial similarity 0.75. N = 10^5, n1 = n2 = 200, m = 50. Curves shown: First Iteration, Random Activation, Tail-censoring, Interval-censoring, and Hybrid censoring-signalling; the horizontal line denotes the performance of a single iteration.
Our theoretical performance predictions show good correspondence with simulation results, already at fairly small-scale networks. The superiority of history-dependent dynamics is apparent. Table 1 shows the performance achieved in a
sparsely-connected network. The predicted similarity after two iterations is reported, starting from initial similarity 0.75, and compared with experimental results
averaged over 100 trials.
Figure 2 illustrates the theoretical two-iterations performance of large, low-activity
'cortical-like' networks, as a function of connectivity. We see that interval-censoring
can maintain high performance throughout the connectivity range. The performance of tail-censoring is very sensitive to connectivity, almost achieving the performance of interval censoring at a narrow low-connectivity range, and becoming
optimal only at very high connectivity. The superior hybrid rule improves on the
others only under high connectivity. As a cortical neuron should receive the concomitant firing of about 200 - 300 neurons in order to be activated (Treves & Rolls
1991), we have set n = 200. We find that the optimal connectivity per neuron, for
biologically plausible tail-censoring activation, is of the same order of magnitude as
actual cortical connectivity. The actual number nN/ K of neurons firing in every
iteration is about 5000, which is in close correspondence with the evidence suggesting that about 4% of the neurons in a module fire at any given moment (Abeles et al. 1990).
References
[1] J.J. Hopfield. Neural networks and physical systems with emergent collective
abilities. Proc. Nat. Acad. Sci. USA, 79:2554, 1982.
[2] D. J. Amit and M. V. Tsodyks. Quantitative study of attractor neural network retrieving at low spike rates: I. substrate-spikes, rates and neuronal gain.
Network, 2:259-273, 1991.
[3] M. Abeles, E. Vaadia, and H. Bergman. Firing patterns of single units in the
prefrontal cortex and neural network models. Network, 1:13-25, 1990.
[4] W. Lytton. Simulations of cortical pyramidal neurons synchronized by inhibitory
interneurons. J. Neurophysiol., 66(3):1059-1079, 1991.
[5] A. Treves and E. T. Rolls. What determines the capacity of autoassociative
memories in the brain? Network, 2:371-397, 1991.
Stochastic Multiple Choice Learning for
Training Diverse Deep Ensembles
Stefan Lee
Virginia Tech
[email protected]
Senthil Purushwalkam
Carnegie Mellon University
[email protected]
David Crandall
Indiana University
[email protected]
Michael Cogswell
Virginia Tech
[email protected]
Viresh Ranjan
Virginia Tech
[email protected]
Dhruv Batra
Virginia Tech
[email protected]
Abstract
Many practical perception systems exist within larger processes that include interactions with users or additional components capable of evaluating the quality of
predicted solutions. In these contexts, it is beneficial to provide these oracle mechanisms with multiple highly likely hypotheses rather than a single prediction. In this
work, we pose the task of producing multiple outputs as a learning problem over an
ensemble of deep networks, introducing a novel stochastic gradient descent based
approach to minimize the loss with respect to an oracle. Our method is simple
to implement, agnostic to both architecture and loss function, and parameter-free.
Our approach achieves lower oracle error compared to existing methods on a wide
range of tasks and deep architectures. We also show qualitatively that the diverse
solutions produced often provide interpretable representations of task ambiguity.
1 Introduction
Perception problems rarely exist in a vacuum. Typically, problems in Computer Vision, Natural
Language Processing, and other AI subfields are embedded in larger applications and contexts. For
instance, the task of recognizing and segmenting objects in an image (semantic segmentation [6])
might be embedded in an autonomous vehicle [7], while the task of describing an image with a
sentence (image captioning [18]) might be part of a system to assist visually-impaired users [22, 30].
In these scenarios, the goal of perception is often not to generate a single output but a set of plausible
hypotheses for a 'downstream' process, such as a verification component or a human operator. These
downstream mechanisms may be abstracted as oracles that have the capability to pick the correct
solution from this set. Such a learning setting is called Multiple Choice Learning (MCL) [8], where
the goal for the learner is to minimize oracle loss achieved by a set of M solutions. More formally,
given a dataset of input-output pairs {(x_i, y_i) | x_i ∈ X, y_i ∈ Y}, the goal of classical supervised
learning is to search for a mapping F : X → Y that minimizes a task-dependent loss ℓ : Y × Y → R+
capturing the error between the actual labeling y_i and predicted labeling ŷ_i. In this setting, the learned
function f makes a single prediction for each input and pays a penalty for that prediction. In contrast,
Multiple Choice Learning seeks to learn a mapping g : X → Y^M that produces M solutions
Ŷ_i = (ŷ_i^1, . . . , ŷ_i^M) such that oracle loss min_m ℓ(y_i, ŷ_i^m) is minimized.
In this work, we fix the form of this mapping g to be the union of outputs from an ensemble of
predictors such that g(x) = {f1 (x), f2 (x), . . . , fM (x)}, and address the task of training ensemble
members f1 , . . . , fM such that g minimizes oracle loss. Under our formulation, different ensemble
members are free to specialize on subsets of the data distribution, so that collectively they produce a
set of outputs which covers the space of high probability predictions well.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 panel labels: Horse, Cow; generated captions: "A couple of birds that are standing in the grass." / "A bird perched on top of a tree branch." / "A bird perched on a tree branch in the sky."]
Figure 1: Single-prediction based models often produce solutions with low expected loss in the face of ambiguity;
however, these solutions are often unrealistic or do not reflect the image content well (row 1). Instead, we train
ensembles under a unified loss which allows each member to produce different outputs reflecting multi-modal
beliefs (row 2). We evaluate our method on image classification, segmentation, and captioning tasks.
Diverse solution sets are especially useful for structured prediction problems with multiple reasonable
interpretations, only one of which is correct. Situations that often arise in practical systems include:
- Implicit class confusion. The label space of many classification problems is often an arbitrary
quantization of a continuous space. For example, a vision system may be expected to classify
between tables and desks, despite many real-world objects arguably belonging to both classes. By
making multiple predictions, this implicit confusion can be viewed explicitly in system outputs.
- Ambiguous evidence. Often there is simply not enough information to make a definitive prediction.
For example, even a human expert may not be able to identify a fine-grained class (e.g., a particular
breed of dog) given an occluded or distant view, but they likely can produce a small set of reasonable
guesses. In such cases, the task of producing a diverse set of possibilities is more clearly defined
than producing one correct answer.
- Bias towards the mode. Many models have a tendency to exhibit mode-seeking behaviors as a
way to reduce expected loss over a dataset (e.g., a conversation model frequently producing "I
don't know"). By making multiple predictions, a system can improve coverage of lower density
areas of the solution space, without sacrificing performance on the majority of examples.
In other words, by optimizing for the oracle loss, a multiple-prediction learner can respond to
ambiguity much like a human does, by making multiple guesses that capture multi-modal beliefs.
In contrast, a single-prediction learner is forced to produce a solution with low expected loss in
the face of ambiguity. Figure 1 illustrates how this can produce solutions that are not useful in
practice. In semantic segmentation, for example, this problem often causes objects to be predicted
as a mixture of multiple classes (like the horse-cow shown in the figure). In image captioning,
minimizing expected loss encourages generic sentences that are "safe" with respect to expected error
but not very informative. For example, Figure 1 shows two pairs of images each having different
image content but very similar, generic captions ? the model knows it is safe to assume that birds are
on branches and that cakes are eaten with forks.
In this paper, we generalize the Multiple Choice Learning paradigm [8, 9] to jointly learn ensembles
of deep networks that minimize the oracle loss directly. We are the first to formalize these ideas in
the context of deep networks and we present a novel training algorithm that avoids costly retraining
[8] of past methods. Our primary technical contribution is the formulation of a stochastic block
gradient descent optimization approach well-suited to minimizing the oracle loss in ensembles of
deep networks, which we call Stochastic Multiple Choice Learning (sMCL). Our formulation is
applicable to any model trained with stochastic gradient descent, is agnostic to the form of the task
dependent loss, is parameter-free, and is time efficient, training all ensemble members concurrently.
We demonstrate the broad applicability and efficacy of sMCL for training diverse deep ensembles
with interpretable emergent expertise on a wide range of problem domains and network architectures,
including Convolutional Neural Network (CNN) [1] ensembles for image classification [17], FullyConvolutional Network (FCN) [20] ensembles for semantic segmentation [6], and combined CNN
and Recurrent Neural Network (RNN) ensembles [14] for image captioning [18]. We provide detailed
analysis of the training and output behaviors of the resulting ensembles, demonstrating how ensemble
member specialization and expertise emerge automatically when trained using sMCL. Our method
outperforms existing baselines and produces sets of outputs with high oracle performance.
2 Related Work
Related Work
Ensemble Learning. Much of the existing work on training ensembles focuses on diversity between
member models as a means to improve performance by decreasing error correlation. This is often
accomplished by resampling existing training data for each member model [27] or by producing
artificial data that encourages new models to be decorrelated with the existing ensemble [21]. Other
approaches train or combine ensemble members under a joint loss [19, 26]. More recently, work of
Hinton et al. [12] and Ahmed et al. [2] explores using 'generalist' network performance statistics to
inform the design of ensemble-of-expert architectures for classification. In contrast, sMCL discovers
specialization as a consequence of minimizing oracle loss. Importantly, most existing methods do
not generalize to structured output labels, while sMCL seamlessly adapts, discovering different
task-dependent specializations automatically.
Generating Multiple Solutions. There is a large body of work on the topic of extracting multiple
diverse solutions from a single model [3, 15, 16, 23, 24]; however, these approaches are designed for
probabilistic structured-output models and are not directly applicable to general deep architectures.
Most related to our approach is the work of Guzman-Rivera et al. [8, 9] which explicitly minimizes
oracle loss over the outputs of an ensemble, formalizing this setting as the Multiple Choice Learning
(MCL) paradigm. They introduce a general alternating block coordinate descent training approach
which requires retraining models multiple times. Vondrick et al. [29] follow a similar methodology to
train multi-modal regressors to predict the feature representations of future frames in video.
Recently, Dey et al. [5] reformulated the problem of generating multiple diverse solutions as a
submodular optimization task in which ensemble members are learned sequentially in a boosting-like
manner to maximize marginal gain in oracle performance. Both these methods require either costly
retraining or sequential training, making them poorly suited to modern deep architectures that can
take weeks to train. To address this serious shortcoming and to provide the first practical algorithm for
training diverse deep ensembles, we introduce a stochastic gradient descent (SGD) based algorithm
to train ensemble members concurrently.
3 Multiple-Choice Learning as Stochastic Block Gradient Descent
We consider the task of training an ensemble of differentiable learners that together produce a set of
solutions with minimal loss with respect to an oracle that selects only the lowest-error prediction.
Notation. We use $[n]$ to denote the set $\{1, 2, \dots, n\}$. Given a training set of input-output pairs $\mathcal{D} = \{(x_i, y_i) \mid x_i \in \mathcal{X}, y_i \in \mathcal{Y}\}$, our goal is to learn a function $g : \mathcal{X} \to \mathcal{Y}^M$ which maps each input to $M$ outputs. We fix the form of $g$ to be an ensemble of $M$ learners $f$ such that $g(x) = (f_1(x), \dots, f_M(x))$. For some task-dependent loss $\ell(y, \hat{y})$, which measures the error between true and predicted outputs $y$ and $\hat{y}$, we define the oracle loss of $g$ over the dataset $\mathcal{D}$ as
$$L_O(\mathcal{D}) \;=\; \sum_{i=1}^{n} \min_{m \in [M]} \ell\left(y_i, f_m(x_i)\right).$$
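As a concrete illustration (a minimal NumPy sketch with made-up names, not code from the paper), the oracle loss charges each example only to its best ensemble member:

    import numpy as np

    def oracle_loss(losses):
        # losses[i, m] = l(y_i, f_m(x_i)) for n examples and M members;
        # keep only the lowest-loss member per example, then sum.
        return losses.min(axis=1).sum()

    # toy example: 3 examples, M = 2 members
    losses = np.array([[0.9, 0.1],
                       [0.2, 0.7],
                       [0.5, 0.4]])
    print(oracle_loss(losses))  # 0.1 + 0.2 + 0.4 = 0.7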
Minimizing Oracle Loss with Multiple Choice Learning. In order to directly minimize the oracle
loss for an ensemble of learners, Guzman-Rivera et al. [8] present an objective which forms a
(potentially tight) upper-bound. This objective replaces the min in the oracle loss with indicator
variables $(p_{i,m})_{m=1}^{M}$ where $p_{i,m}$ is 1 if predictor $m$ has the lowest error on example $i$,
$$\operatorname*{argmin}_{f_m,\, p_{i,m}} \;\; \sum_{i=1}^{n}\sum_{m=1}^{M} p_{i,m}\, \ell\left(y_i, f_m(x_i)\right) \qquad \text{s.t.} \quad \sum_{m=1}^{M} p_{i,m} = 1, \quad p_{i,m} \in \{0, 1\}. \qquad (1)$$
The resulting minimization is a constrained joint optimization over ensemble parameters and datapoint assignments. The authors propose an alternating block algorithm, shown in Algorithm 1, to
approximately minimize this objective. Similar to K-Means or "hard-EM," this approach alternates
between assigning examples to their min-loss predictors and training models to convergence on the
partition of examples assigned to them. Note that this approach is not feasible with training deep
networks, since modern architectures [11] can take weeks or months to train a single model once.
Figure 2: The MCL approach of [8] (Alg. 1) requires costly retraining while our sMCL method (Alg. 2) works
within standard SGD solvers, training all ensemble members under a joint loss.
Stochastic Multiple Choice Learning. To overcome this shortcoming, we propose a stochastic
algorithm for differentiable learners which interleaves the assignment step with batch updates in
stochastic gradient descent. Consider the partial derivative of the objective in Eq. 1 with respect to
the output of the $m$th individual learner on example $x_i$,
$$\frac{\partial L_O}{\partial f_m(x_i)} \;=\; p_{i,m}\, \frac{\partial \ell\left(y_i, f_m(x_i)\right)}{\partial f_m(x_i)}. \qquad (2)$$
Notice that if fm is the minimum error predictor for example xi , then pi,m = 1, and the gradient
term is the same as if training a single model; otherwise, the gradient is zero. This behavior lends
itself to a straightforward optimization strategy for learners trained by SGD based solvers. For each
batch, we pass the examples through the learners, calculating losses from each ensemble member for
each example. During the backward pass, the gradient of the loss for each example is backpropagated
only to the lowest error predictor on that example (with ties broken arbitrarily).
This approach, which we call Stochastic Multiple Choice Learning (sMCL), is shown in Algorithm 2.
sMCL is generalizable to any learner trained by stochastic gradient descent and is thus applicable to
an extensive range of modern deep networks. Unlike the iterative training schedule of MCL, sMCL
ensembles need only be trained to convergence once in parallel. sMCL is also agnostic to the exact
form of loss function $\ell$ such that it can be applied without additional effort on a variety of problems.
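To make the "winner-take-gradient" step concrete, here is a minimal NumPy sketch on a toy ensemble of linear regressors under squared loss. It is not the paper's implementation (which plugs the same rule into standard deep-learning SGD solvers); all names and hyperparameters are hypothetical:

    import numpy as np

    def smcl_batch_step(Ws, X, y, lr=0.1):
        # One sMCL step for M linear members f_m(x) = W_m x under squared loss.
        preds = np.stack([X @ W.T for W in Ws])      # (M, batch, out_dim)
        losses = ((preds - y) ** 2).sum(axis=2)      # (M, batch)
        winners = losses.argmin(axis=0)              # lowest-error member per example
        for m, W in enumerate(Ws):
            mask = (winners == m)                    # the indicator p_{i,m} of Eq. (1)
            if mask.any():                           # gradient flows only to winners
                err = preds[m][mask] - y[mask]
                W -= lr * 2.0 * err.T @ X[mask] / mask.sum()

    rng = np.random.default_rng(0)
    Ws = [rng.normal(size=(1, 4)) for _ in range(3)]  # M = 3 members
    X, y = rng.normal(size=(8, 4)), rng.normal(size=(8, 1))
    smcl_batch_step(Ws, X, y)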
4 Experiments
In this section, we present results for sMCL ensembles trained for the tasks and deep architectures
shown in Figure 3. These include CNN ensembles for image classification, FCN ensembles for
semantic segmentation, and CNN+RNN ensembles for image caption generation.
Baselines. Many existing general techniques for inducing diversity are not directly applicable to deep
networks. We compare our proposed method against:
- Classical ensembles in which each model is trained under an independent loss with differing
random initializations. We will refer to these as Indp. ensembles in figures.
- MCL [8] that alternates between training models to convergence on assigned examples and
allocating examples to their lowest error model. We repeat this process for 5 meta-iterations and
initialize ensembles with (different) random weights. We find MCL performs similarly to sMCL
on small classification tasks; however, MCL performance drops substantially on segmentation and
captioning tasks. Unlike sMCL which can effectively reassign an example once per epoch, MCL
only does this after convergence, limiting its capacity to specialize compared to sMCL. We also
note that sMCL is 5x faster than MCL, where the factor 5 is the result of choosing 5 meta-iterations
(other applications may require more, further increasing the gap).
- Dey et al. [5] train models sequentially in a boosting-like fashion, each time reweighting examples
to maximize marginal increase of the evaluation metric. We find these models saturate quickly as
the ensemble size grows. As performance increases, the marginal gain and therefore the weights approach zero. With low weights, the average gradient backpropagated for stochastic learners drops substantially, reducing the rate and effectiveness of learning without careful tuning. To compute weights, [5] requires an error measure bounded above by 1: accuracy (for classification) and IoU (for segmentation) satisfy this; the CIDEr-D score [28] divided by 10 guarantees this for captioning.
[Figure 3 (architecture diagrams omitted): (a) convolutional classification model of [1] for CIFAR10 [17]; (b) fully-convolutional segmentation model of Long et al. [20]; (c) CNN+RNN based captioning model of Karpathy et al. [14].]
Figure 3: We experiment with three problem domains using the various architectures shown above.
Oracle Evaluation. We present results as oracle versions of the task-dependent performance metrics.
These oracle metrics report the highest score over all outputs for a given input. For example, in
classification tasks, oracle accuracy is exactly the top-k criterion of ImageNet [25], i.e. whether at
least one of the outputs is the correct label. Likewise, the oracle intersection over union (IoU) is the
highest IoU between the ground truth segmentation and any one of the outputs. Oracle metrics allow
the evaluation of multiple-prediction systems separately from downstream re-ranking or selection
systems, and have been extensively used in previous work [3, 5, 8, 9, 15, 16, 23, 24].
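As a minimal sketch (plain NumPy, hypothetical names), an oracle metric simply reduces a matrix of per-member results by taking the best entry per example before averaging:

    import numpy as np

    def oracle_accuracy(member_preds, labels):
        # correct if ANY of the M members predicts the label (top-k style)
        return (member_preds == labels).any(axis=0).mean()

    def oracle_metric(scores):
        # generic version: scores[m, i] is e.g. the IoU of member m on example i
        return scores.max(axis=0).mean()

    preds = np.array([[0, 1, 2], [2, 1, 0]])            # M = 2 members, 3 examples
    print(oracle_accuracy(preds, np.array([2, 1, 1])))  # 2/3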
Our experiments convincingly demonstrate the broad applicability and efficacy of sMCL for training
diverse deep ensembles. In all three experiments, sMCL significantly outperforms classical ensembles,
Dey et al. [5] (typical improvements of 6-10%), and MCL (while providing a 5x speedup over MCL).
Our analysis shows that the exact same algorithm (sMCL) leads to the automatic emergence of
different interpretable notions of specializations among ensemble members.
4.1 Image Classification
Model. We begin our experiments with sMCL on the CIFAR10 [17] dataset using the small convolutional neural network "CIFAR10-Quick" provided with the Caffe deep learning framework [13]. CIFAR10 is a ten-way classification task with small 32x32 images. For these experiments, the
reference model is trained using a batch size of 350 for 5,000 iterations with a momentum of 0.9,
weight decay of 0.004, and an initial learning rate of 0.001 which drops to 0.0001 after 4000 iterations.
Results. Oracle accuracy for sMCL and baseline ensembles of size 1 to 6 are shown in Figure
4a. The sMCL trained ensembles result in higher oracle accuracy than the baseline methods, and
are comparable to MCL while being 5x faster. The method of Dey et al. [5] performs worse than
independent ensembles as ensemble size grows. Figure 4b shows the oracle loss during training for
sMCL and regular ensembles. The sMCL trained models optimize for the oracle cross-entropy loss
directly, not only arriving at lower loss solutions but also reducing error more quickly.
Interpretable Expertise: sMCL Induces Label-Space Clustering. Figure 4c shows the class-wise
distribution of the assignment of test datapoints to the oracle or "winning" predictor for an M = 4 sMCL ensemble. The level of class division is striking: most predictors become specialists for
certain classes. Note that these divisions emerge from training under the oracle loss and are not
hand-designed or pre-initialized in any way. In contrast, Figure 4f shows that the oracle assignments
for a standard ensemble are nearly uniform. To explore the space between these two extremes, we
loosen the constraints of Eq. 1 such that the lowest k error predictors are penalized. By varying k
between 1 and the number of ensemble members M , the models transition from minimizing oracle
loss at k = 1 to a traditional ensemble at k = M . Figures 4d and 4e show these results. We find
a direct correlation between the degree of specialization and oracle accuracy, with k = 1 netting
highest oracle accuracy.
4.2 Semantic Segmentation
We now present our results for the semantic segmentation task on the Pascal VOC dataset [6].
Model. We use the fully convolutional network (FCN) architecture presented by Long et al. [20]
as our base model. Like [20], we train on the Pascal VOC 2011 training set augmented with extra
segmentations provided in [10] and we test on a subset of the VOC 2011 validation set. We initialize our sMCL models from a standard ensemble trained for 50 epochs at a learning rate of 10^-3. The sMCL ensemble is then fine-tuned for another 15 epochs at a reduced learning rate of 10^-5.
[Figure 4 (plots omitted): (a) Effect of Ensemble Size (oracle accuracy vs. M for sMCL, MCL, Dey [5], Indp.); (b) Oracle Loss During Training (M = 4); (c) k=1, (d) k=2, (e) k=3, (f) k=M=4: per-class oracle assignment percentages over the ten CIFAR10 classes across the four ensemble members.]
Figure 4: sMCL trained ensembles produce higher oracle accuracies than baselines (a) by directly optimizing the oracle loss (b). By varying the number of predictors k each example can be assigned to, we can interpolate between sMCL and standard ensembles, and (c-f) show the percentage of test examples of each class assigned to each ensemble member by the oracle for various k. These divisions are not preselected and show how specialization is an emergent property of sMCL training.
Results. Figure 5a shows oracle accuracy (class-averaged IoU) for all methods with ensemble sizes
ranging from 1 to 6. Again, sMCL significantly outperforms all baselines (~7% relative improvement
over classical ensembles). In this more complex setting, we see the method of Dey et al. [5] saturates
more quickly, resulting in performance worse than classical ensembles as ensemble size grows.
Though we expect MCL to achieve similar results as sMCL, retraining the MCL ensembles a sufficient
number of times proved infeasible so results after five meta-iterations are shown.
Interpretable Expertise: sMCL as Segmentation Specialists. In Figure 5b, we analyze the class
distribution of the predictions using an sMCL ensemble with 4 members. For each test sample, the
oracle picks the prediction which corresponds to the ensemble member with the highest accuracy
for that sample. We find the specialization with respect to classes is much less evident than in the
classification experiments. As segmentation presents challenges other than simply selecting the
correct class, specialization can occur in terms of shape and frequency of predicted segments in
addition to class divisions; however, we do still see some class biases: network 2 captures cows,
tables, and sofas well and network 4 has become an expert on sheep and horses.
Figure 6 shows qualitative results from a four member sMCL ensemble. We can clearly observe
the diversity in the segmentations predicted by different members. In the first row, we see the
majority of the ensemble members produce dining tables of various completeness in response to the
visual uncertainty caused by the clutter. Networks 2 and 3 capture this ambiguity well, producing
segmentations with the dining table completely present or absent. Row 2 demonstrates the capacity
of sMCL ensembles to provide multiple high quality solutions. The models are confused whether the animal is a horse or a cow: models 1 and 3 produce typical "safe" responses while models 2 and 4 attempt to give cohesive responses. Finally, row 3 shows how the models can learn biases about the frequency of segments, with model 3 presenting only the sheep.
[Figure 5 (plots omitted): (a) Effect of Ensemble Size (oracle mean IoU vs. M for sMCL, MCL, Dey [5], Indp.); (b) Oracle Assignment Distributions by Class across Nets 1-4.]
Figure 5: a) sMCL trained ensembles consistently result in improved oracle mean IoU over baselines on PASCAL VOC 2011. b) Distribution of examples from each category assigned by the oracle for an sMCL ensemble.
[Figure 6 (images omitted): input images with the prediction of each sMCL ensemble member (Nets 1-4) and the oracle output of an independently trained ensemble, each annotated with its IoU.]
Figure 6: Sample images and corresponding predictions obtained by each member of the sMCL ensemble as well as the top output of a classical ensemble. The output with minimum loss on each example is outlined in red. Notice that sMCL ensembles vary in the shape, class, and frequency of predicted segments.
4.3 Image Captioning
In this section, we show that sMCL trained ensembles can produce sets of high quality and diverse
sentences, which is essential to improving recall and capturing ambiguities in language and perception.
Model. We adopt the model and training procedure of Karpathy et al. [14], utilizing their publicly
available implementation neuraltalk2. The model consists of a VGG16 network [4] which encodes
the input image as a fixed-length representation for a Long Short-Term Memory (LSTM) language
model. We train and test on the MSCOCO dataset [18], using the same splits as [14]. We perform two
experimental setups by either freezing or finetuning the CNN. In the first, we freeze the parameters
of the CNN and train multiple LSTM models using the CNN as a static feature generator. In the
second, we aggregate and back-propagate the gradients from each LSTM model through the CNN in
a tree-like model structure. This is largely a consequence of memory restrictions, as our hardware could
not accommodate multiple VGG16 networks. We train each ensemble for 70k iterations with the
parameters of the CNN fixed. For the fine-tuning experiments, we perform another 70k iterations of
training to fine-tune the CNN. We generate sentences for testing by performing beam search with a
beam width of two (following [14]).
Results. Table 1 presents the oracle CIDEr-D [28] scores for all methods on the validation set. We
additionally compare with all outputs of a beam search over a single CNN+LSTM model with beam
width ranging from 1 to 5. sMCL significantly outperforms the baseline ensemble learning methods
(shown in the upper section of the table), increasing both oracle performance and the number of
unique n-grams. For M = 5, beam search from a single model achieves greater oracle but produces
significantly fewer unique n-grams. We note that beam search is an inference method and increased
beam width could provide similar benefits for sMCL ensembles.
Method                  | Oracle CIDEr-D (M=1..5)             | # Unique n-Grams, M=5 (n=1..4) | Avg. Length
sMCL                    | 0.684  0.822  0.862  0.911  0.922   | 713    2902   6464   15427     | 10.21
MCL [8]                 | 0.684  0.752  0.81   0.823  0.852   | 384    1565   3586   9551      | 9.87
Dey [5]                 | 0.684  0.798  0.850  0.887  0.910   | 584    2266   4969   12208     | 10.26
Indp.                   | 0.684  0.757  0.784  0.809  0.831   | 540    2003   4312   10297     | 10.24
sMCL (fine-tuned CNN)   | 0.912  1.064  1.130  1.179  1.184   | 1135   6028   15184  35518     | 10.43
Indp. (fine-tuned CNN)  | 0.912  1.001  1.05   1.073  1.095   | 921    4335   10534  23811     | 10.33
Beam Search             | 0.654  0.754  0.833  0.888  0.943   | 580    2272   4920   12920     | 10.62
Table 1: sMCL based ensembles outperform other ensemble methods at captioning, improving both oracle performance and the number of distinct n-grams. For low M, sMCL also performs better than multiple-output decoders.
[Figure 7 (images omitted). Captions generated for four input images, four ensemble members each.
Image 1 - Independently Trained Networks: "A man riding a wave on top of a surfboard." (all four members). sMCL Ensemble: "A man riding a wave on top of a surfboard."; "A person on a surfboard in the water."; "A surfer is riding a wave in the ocean."; "A surfer riding a wave in the ocean."
Image 2 - Independently Trained Networks: "A group of people standing on a sidewalk."; "A man is standing in the middle of the street."; "A group of people standing around a fire hydrant."; "A group of people standing around a fire hydrant". sMCL Ensemble: "A man is walking down the street with an umbrell."; "A group of people sitting at a table with umbrellas."; "A group of people standing around a large plane."; "A group of people standing in front of a building".
Image 3 - Independently Trained Networks: "A kitchen with a stove and a microwave."; "A white refrigerator freezer sitting inside of a kitchen."; "A white refrigerator sitting next to a window."; "A white refrigerator freezer sitting in a kitchen". sMCL Ensemble: "A cat sitting on a chair in a living room."; "A kitchen with a stove and a sink."; "A cat is sitting on top of a refrigerator."; "A cat sitting on top of a wooden table".
Image 4 - Independently Trained Networks: "A bird is sitting on a tree branch."; "A bird is perched on a branch in a tree."; "A bird is perched on a branch in a tree."; "A bird is sitting on a tree branch". sMCL Ensemble: "A small bird perched on top of a tree branch."; "A couple of birds that are standing in the grass."; "A bird perched on top of a branch."; "A bird perched on a tree branch in the sky".]
Figure 7: Comparison of sentences generated by members of a standard independently trained ensemble and an sMCL based ensemble of size four.
Interpretable Expertise: sMCL as N-Gram Specialists. Figure 7 shows example images and generated captions from standard and sMCL ensembles of size four (results from beam search over a
single model are similar). It is evident that the independently trained models tend to predict similar
sentences independent of initialization, perhaps owing to the highly structured nature of the output
space and the mode bias of the underlying language model. On the other hand, the sMCL based
ensemble generates diverse sentences which capture ambiguity both in language and perception. The
first row shows an extreme case in which all of the members of the standard ensemble predict identical
sentences. In contrast, the sMCL ensemble produces sentences that describe the scene with many
different structures. In row three, both models are confused about the content of the image, mistaking
the pile of suitcases as kitchen appliances. However, the sMCL ensemble widens the scope of some
sentences to include the cat clearly depicted in the image. The fourth row is an example of regression
towards the mode, with the standard model producing multiple similar sentences describing birds on
branches. In the sMCL ensemble, we also see this tendency; however, one model breaks away and
captures the true content of the image.
5 Conclusion
To summarize, we propose Stochastic Multiple Choice Learning (sMCL), an SGD-based technique
for training diverse deep ensembles that follows a "winner-take-gradient" training strategy. Our
experiments demonstrate the broad applicability and efficacy of sMCL for training diverse deep
ensembles. In all experimental settings, sMCL significantly outperforms classical ensembles and
other strong baselines including the 5x slower MCL procedure. Our analysis shows that exactly the
same algorithm (sMCL) automatically generates specializations among ensemble members along
different task-specific dimensions. sMCL is simple to implement, agnostic to both architecture and
loss function, parameter free, and simply involves introducing one new sMCL layer into existing
ensemble architectures.
Acknowledgments
This work was supported in part by a National Science Foundation CAREER award, an Army Research Office YIP
award, ICTAS Junior Faculty award, Office of Naval Research grant N00014-14-1-0679, Google Faculty Research
award, AWS in Education Research grant, and NVIDIA GPU donation, all awarded to DB, and by an NSF CAREER
award (IIS-1253549), the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory
contract FA8650-12-C-7212, a Google Faculty Research award, and an NVIDIA GPU donation, all awarded to DC.
Computing resources used by this work are supported in part by NSF (ACI-0910812 and CNS-0521433), the Lily
Endowment, Inc., and the Indiana METACyt Initiative. The U.S. Government is authorized to reproduce and distribute
reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and
conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the
official policies or endorsements, either expressed or implied, of IARPA, AFRL, NSF, or the U.S. Government.
References
[1] CIFAR-10 Quick Network Tutorial. http://caffe.berkeleyvision.org/gathered/examples/cifar10.html, 2016.
[2] K. Ahmed, M. H. Baig, and L. Torresani. Network of experts for large-scale image categorization. In
arXiv preprint arXiv:1604.06119, 2016.
[3] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-Best Solutions in Markov
Random Fields. In Proceedings of European Conference on Computer Vision (ECCV), 2012.
[4] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep
into convolutional nets. arXiv preprint arXiv:1405.3531, 2014.
[5] D. Dey, V. Ramakrishna, M. Hebert, and J. Andrew Bagnell. Predicting multiple structured visual
interpretations. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2015.
[6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results. http://www.pascalnetwork.org/challenges/VOC/voc2011/workshop/index.html.
[7] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets Robotics: The KITTI Dataset. International
Journal of Robotics Research (IJRR), 2013.
[8] A. Guzman-Rivera, D. Batra, and P. Kohli. Multiple Choice Learning: Learning to Produce Multiple
Structured Outputs. In Advances in Neural Information Processing Systems (NIPS), 2012.
[9] A. Guzman-Rivera, P. Kohli, D. Batra, and R. Rutenbar. Efficiently enforcing diversity in multi-output
structured prediction. In Proceedings of the International Conference on Artificial Intelligence and
Statistics (AISTATS), 2014.
[10] B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In
Proceedings of IEEE International Conference on Computer Vision (ICCV), 2011.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint
arXiv:1512.03385, 2015.
[12] G. E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In Advances in Neural
Information Processing Systems (NIPS) - Deep Learning Workshop, 2014.
[13] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
[14] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[15] A. Kirillov, B. Savchynskyy, D. Schlesinger, D. Vetrov, and C. Rother. Inferring m-best diverse solutions
in a single one. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2015.
[16] A. Kirillov, D. Schlesinger, D. Vetrov, C. Rother, and B. Savchynskyy. M-best-diverse labelings for
submodular energies and beyond. In Advances in Neural Information Processing Systems (NIPS), 2015.
[17] A. Krizhevsky. Learning multiple layers of features from tiny images, 2009.
[18] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context, 2014.
[19] Y. Liu and X. Yao. Ensemble learning via negative correlation. Neural Networks, 12(10):1399-1404, 1999.
[20] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[21] P. Melville and R. J. Mooney. Creating diversity in ensembles using artificial data. Information Fusion, 6(1):99-111, 2005.
[22] Microsoft. Decades of computer vision research, one "Swiss Army knife". blogs.microsoft.com/next/2016/03/30/decades-of-computer-vision-research-one-swiss-army-knife/, 2016.
[23] D. Park and D. Ramanan. N-best maximal decoders for part models. In Proceedings of IEEE International
Conference on Computer Vision (ICCV), pages 2627-2634, 2011.
[24] A. Prasad, S. Jegelka, and D. Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In Advances in Neural Information Processing Systems (NIPS), 2014.
[25] O. Russakovsky, J. Deng, J. Krause, A. Berg, and L. Fei-Fei. The ImageNet Large Scale Visual Recognition
Challenge 2012 (ILSVRC2012). http://www.image-net.org/challenges/LSVRC/2012/.
[26] A. Strehl and J. Ghosh. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. The Journal of Machine Learning Research, 3:583-617, 2003.
[27] K. Tumer and J. Ghosh. Error correlation and error reduction in ensemble classifiers. Connection Science, 8(3-4):385-404, 1996.
[28] R. Vedantam, C. Lawrence Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[29] C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating visual representations from unlabeled video. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 98-106, 2016.
[30] WIRED. Facebook's AI Can Caption Photos for the Blind on Its Own. wired.com/2015/10/facebook-artificial-intelligence-describes-photo-captions-for-blind-people/, 2015.
| 6270 |@word kohli:2 cnn:14 faculty:3 version:1 middle:1 retraining:5 everingham:1 open:1 surfboard:6 seek:1 propagate:1 prasad:1 pick:2 sgd:4 rivera:5 accommodate:1 reduction:1 initial:1 liu:1 efficacy:3 score:3 selecting:1 tuned:3 past:1 existing:8 outperforms:5 com:2 assigning:1 gpu:2 neuraltalk2:1 distant:1 partition:2 informative:1 shape:2 designed:2 interpretable:5 update:1 drop:3 grass:2 resampling:1 intelligence:3 discovering:1 guess:2 fewer:1 item:1 plane:1 yi1:1 short:1 tumer:1 completeness:1 boosting:2 appliance:1 org:4 zhang:1 five:1 along:1 direct:1 become:2 initiative:1 qualitative:1 specialize:2 consists:1 combine:1 inside:1 manner:1 introduce:2 expected:6 behavior:3 frequently:1 multi:4 voc:5 decreasing:1 automatically:3 actual:1 window:1 solver:2 increasing:2 spain:1 begin:1 notation:1 formalizing:1 bounded:1 agnostic:4 provided:2 lowest:5 confused:2 underlying:1 argmin:1 interpreted:1 minimizes:3 substantially:2 generalizable:1 unified:1 differing:1 indiana:3 finding:1 ghosh:2 guarantee:1 sky:2 tie:1 exactly:2 demonstrates:1 classifier:1 ramanan:2 grant:2 producing:7 arguably:1 segmenting:1 thereon:1 consequence:1 despite:1 vetrov:2 meet:2 perched:7 approximately:1 might:2 bird:14 initialization:2 frog:1 mistaking:1 range:3 subfields:1 averaged:1 practical:3 unique:3 acknowledgment:1 testing:1 union:2 practice:1 implement:2 block:4 swiss:2 procedure:2 maire:1 area:1 rnn:3 significantly:5 vedaldi:1 word:1 pre:1 regular:1 savchynskyy:2 selection:1 operator:1 unlabeled:1 context:4 optimize:1 restriction:1 map:1 quick:2 ranjan:1 www:2 dean:1 straightforward:1 williams:1 independently:3 utilizing:1 importantly:1 baig:1 datapoints:1 embedding:1 notion:1 autonomous:1 coordinate:1 limiting:1 user:2 caption:5 exact:2 hypothesis:2 recognition:6 walking:1 fork:1 preprint:3 capture:5 refrigerator:4 ilsvrc2012:1 sun:1 highest:4 broken:1 freezer:2 occluded:1 trained:17 tight:1 segment:3 shakhnarovich:1 division:4 f2:1 learner:12 completely:1 sink:1 joint:3 finetuning:1 emergent:2 various:3 cat:5 maji:1 train:11 forced:1 distinct:1 shortcoming:2 describe:1 fast:1 artificial:3 crandall:1 labeling:2 horse:5 deer:1 choosing:1 aggregate:1 caffe:4 hydrant:2 larger:2 plausible:1 cvpr:3 otherwise:1 melville:1 statistic:2 simonyan:1 breed:1 jointly:1 itself:1 emergence:1 differentiable:2 net:10 dining:2 propose:3 interaction:1 maximal:1 combining:1 poorly:1 achieve:1 adapts:1 description:2 inducing:1 convergence:4 impaired:1 darrell:1 cluster:1 wired:2 captioning:9 produce:16 generating:3 categorization:1 object:5 stiller:1 donation:2 andrew:2 recurrent:1 pose:1 kitti:1 bourdev:1 eq:2 strong:1 coverage:1 predicted:7 involves:1 distilling:1 iou:21 safe:3 correct:5 owing:1 stochastic:14 human:3 intepretable:1 education:1 require:2 government:2 fix:2 f1:3 im:2 around:3 dhruv:1 ground:1 visually:1 lawrence:1 mapping:3 predict:3 week:2 surfer:2 scope:1 achieves:2 vary:1 adopt:1 torralba:1 purpose:1 lenz:1 applicable:4 sofa:1 label:4 stefan:1 minimization:1 suitcase:1 clearly:3 concurrently:2 cider:4 rather:1 varying:2 office:2 focus:1 june:1 naval:1 improvement:2 consistently:1 seamlessly:1 tech:4 contrast:5 baseline:9 wooden:1 inference:1 dependent:5 typically:1 eaten:1 mth:1 perona:1 reproduce:1 labelings:1 selects:1 classification:12 among:2 pascal:4 html:2 animal:1 constrained:1 yip:1 initialize:2 marginal:3 field:1 construct:1 once:3 having:1 identical:1 broad:3 park:1 nearly:1 fcn:3 future:1 minimized:1 report:1 guzman:5 torresani:1 serious:1 modern:3 national:1 interpolate:1 individual:1 
mcl:18 kitchen:5 cns:1 fire:2 microsoft:3 attempt:1 highly:2 possibility:1 evaluation:4 sheep:2 alignment:1 mixture:1 extreme:2 copyright:1 allocating:1 microwave:1 capable:1 partial:1 cifar10:5 tree:9 initialized:1 re:1 sacrificing:1 schlesinger:2 minimal:1 instance:1 classify:1 increased:1 cover:1 assignment:5 applicability:3 introducing:2 subset:3 predictor:9 uniform:1 recognizing:1 krizhevsky:1 virginia:4 front:1 answer:1 combined:1 person:1 density:1 explores:1 lstm:4 international:6 standing:8 lee:1 probabilistic:1 contract:1 michael:1 together:1 quickly:3 yao:1 again:1 ambiguity:7 reflect:1 worse:2 creating:1 expert:4 derivative:1 return:1 distribute:1 lily:1 diversity:5 inc:1 satisfy:1 explicitly:2 ranking:1 caused:1 blind:2 vehicle:1 view:2 break:1 analyze:1 red:1 wave:7 capability:1 parallel:1 annotation:1 jia:1 contribution:1 minimize:5 air:1 hariharan:1 accuracy:10 convolutional:8 publicly:1 largely:1 likewise:1 ensemble:101 sitting:9 identify:1 gathered:1 efficiently:1 generalize:2 produced:1 ren:1 expertise:5 russakovsky:1 mooney:1 minm:1 datapoint:1 detector:1 inform:1 decorrelated:1 facebook:1 against:1 energy:1 frequency:3 static:1 couple:2 gain:2 dataset:7 proved:1 recall:1 conversation:1 knowledge:2 segmentation:17 formalize:1 schedule:1 anticipating:1 reflecting:1 back:1 afrl:1 higher:2 supervised:1 follow:1 methodology:1 modal:3 response:3 improved:1 zisserman:2 formulation:3 though:1 dey:9 implicit:2 correlation:4 hand:2 freezing:1 reweighting:1 google:2 mode:4 quality:3 perhaps:1 smcl:65 grows:3 riding:7 building:1 effect:2 umbrella:1 pascalnetwork:1 true:2 assigned:5 alternating:2 laboratory:1 semantic:9 white:3 cohesive:1 during:3 width:3 encourages:2 ambiguous:1 berkeleyvision:2 criterion:1 presenting:1 evident:2 demonstrate:3 confusion:2 performs:3 vondrick:2 image:27 wise:1 ranging:2 novel:2 recently:2 discovers:1 parikh:1 common:1 winner:1 interpretation:2 he:1 mellon:1 refer:1 freeze:1 ai:2 tuning:2 automatic:1 outlined:1 similarly:1 submodular:3 language:5 interleaf:1 base:2 own:1 optimizing:2 awarded:2 ship:1 scenario:1 coco:1 certain:1 n00014:1 nvidia:2 meta:3 hay:1 arbitrarily:1 blog:1 vt:4 yi:9 accomplished:1 minimum:2 additional:2 greater:1 deng:1 paradigm:2 maximize:2 living:1 vgg16:2 branch:11 multiple:34 ii:1 technical:1 faster:2 ahmed:2 cross:1 long:4 cifar:1 lin:1 divided:1 knife:2 award:6 loosen:1 prediction:18 regression:1 vision:14 cmu:1 metric:4 arxiv:6 iteration:8 achieved:1 robotics:2 beam:9 addition:1 fine:6 separately:1 winn:1 krause:1 aws:1 source:1 extra:1 unlike:2 tend:1 db:1 member:24 effectiveness:1 call:2 extracting:1 split:1 enough:1 cogswell:2 variety:1 architecture:13 fm:10 cow:4 reduce:1 idea:1 absent:1 whether:2 specialization:9 chatfield:1 assist:1 reuse:1 effort:1 penalty:1 reformulated:1 fa8650:1 cause:1 reassign:1 deep:22 useful:2 detailed:1 tune:1 dbatra:1 karpathy:3 clutter:1 desk:1 backpropagated:2 extensively:1 ten:1 induces:1 category:1 hardware:1 reduced:1 generate:2 http:4 outperform:1 exist:2 percentage:1 nsf:3 tutorial:1 notice:2 governmental:1 per:1 diverse:17 carnegie:1 group:6 four:3 demonstrating:1 yadollahpour:1 backward:1 downstream:3 inverse:1 uncertainty:1 respond:1 striking:1 fourth:1 reasonable:2 geiger:1 endorsement:1 comparable:1 capturing:2 bound:1 layer:2 pay:1 stove:2 replaces:1 truck:1 oracle:53 activity:1 occur:1 constraint:1 fei:4 scene:1 encodes:1 generates:2 generalist:1 min:3 chair:1 performing:1 speedup:1 structured:9 alternate:2 project:1 vacuum:1 belonging:1 disclaimer:1 beneficial:1 
describes:1 em:1 making:4 iccv:4 resource:1 describing:2 mechanism:2 know:2 photo:2 available:1 kirillov:2 doll:1 observe:1 sidewalk:1 away:1 generic:2 ocean:2 batch:3 specialist:3 slower:1 cake:1 top:12 clustering:1 include:4 widens:1 calculating:1 especially:1 classical:7 seeking:1 objective:4 implied:1 malik:1 strategy:2 costly:3 primary:1 traditional:1 bagnell:1 exhibit:1 gradient:13 lends:1 arbelaez:1 capacity:2 majority:2 decoder:2 street:2 topic:1 consensus:1 urtasun:1 water:1 enforcing:1 rother:2 length:2 index:1 providing:1 minimizing:5 setup:1 potentially:1 negative:1 design:1 implementation:1 policy:1 perform:2 upper:2 markov:1 descent:8 situation:1 hinton:2 saturates:1 frame:1 dc:1 arbitrary:1 david:1 pair:3 dog:2 junior:1 extensive:1 sentence:11 imagenet:2 rutenbar:1 connection:1 learned:2 herein:1 barcelona:1 nip:5 address:2 able:1 beyond:1 perception:5 pattern:4 challenge:5 convincingly:1 summarize:1 preselected:1 pirsiavash:1 including:2 video:2 belief:2 memory:2 unrealistic:1 gool:1 natural:1 force:1 predicting:1 indicator:1 residual:1 advanced:1 representing:1 improve:3 ijrr:1 reprint:1 epoch:3 relative:1 embedded:2 loss:40 fully:3 expect:1 generation:1 ramakrishna:1 generator:1 validation:2 foundation:1 shelhamer:1 degree:1 jegelka:1 verification:1 sufficient:1 tiny:1 pi:8 strehl:1 endowment:1 eccv:1 row:8 lo:2 penalized:1 pile:1 repeat:1 supported:2 free:4 arriving:1 infeasible:1 hebert:1 bias:4 allow:1 wide:2 face:2 emerge:2 benefit:1 van:1 overcome:1 dimension:1 evaluating:1 world:1 avoids:1 transition:1 gram:5 author:2 qualitatively:1 avg:1 regressors:1 contour:1 abstracted:1 sequentially:2 belongie:1 vedantam:1 xi:11 don:1 aci:1 search:7 continuous:1 iterative:1 decade:2 table:9 additionally:1 learn:4 nature:1 delving:1 career:2 improving:1 alg:2 automobile:1 complex:1 necessarily:1 european:1 domain:2 official:1 voc2011:2 aistats:1 zitnick:2 definitive:1 arise:1 iarpa:2 body:1 augmented:1 fashion:1 mscoco:1 momentum:1 inferring:1 winning:1 grained:1 saturate:1 down:1 specific:1 decay:1 evidence:1 fusion:1 essential:1 workshop:2 quantization:1 sequential:1 effectively:1 notwithstanding:1 illustrates:1 gap:1 authorized:1 suited:2 entropy:1 intersection:1 depicted:1 simply:3 likely:2 explore:1 army:3 visual:6 vinyals:1 expressed:1 contained:1 collectively:1 corresponds:1 truth:1 goal:4 viewed:1 month:1 careful:1 towards:2 room:1 man:7 content:4 hard:1 feasible:1 lsvrc:1 typical:2 reducing:2 batra:5 called:1 pas:2 tendency:2 experimental:2 rarely:1 formally:1 berg:1 people:7 devil:1 evaluate:1 |
5,826 | 6,271 | Global Optimality of Local Search
for Low Rank Matrix Recovery
Srinadh Bhojanapalli
[email protected]
Behnam Neyshabur
[email protected]
Nathan Srebro
[email protected]
Toyota Technological Institute at Chicago
Abstract
We show that there are no spurious local minima in the non-convex factorized
parametrization of low-rank matrix recovery from incoherent linear measurements.
With noisy measurements we show all local minima are very close to a global
optimum. Together with a curvature bound at saddle points, this yields a polynomial
time global convergence guarantee for stochastic gradient descent from random
initialization.
1
Introduction
Low rank matrix recovery problem is heavily studied and has numerous applications in collaborative
filtering, quantum state tomography, clustering, community detection, metric learning and multi-task
learning [21, 12, 9, 27].
We consider the "matrix sensing" problem of recovering a low-rank (or approximately low rank) p.s.d. matrix$^1$ $X^\star \in \mathbb{R}^{n \times n}$, given a linear measurement operator $\mathcal{A} : \mathbb{R}^{n \times n} \to \mathbb{R}^m$ and noisy measurements $y = \mathcal{A}(X^\star) + w$, where $w$ is an i.i.d. noise vector. An estimator for $X^\star$ is given by the rank-constrained, non-convex problem
$$\min_{X :\; \mathrm{rank}(X) \le r} \; \|\mathcal{A}(X) - y\|^2. \qquad (1)$$
This matrix sensing problem has received considerable attention recently [30, 29, 26]. This and other
rank-constrained problems are common in machine learning and related fields, and have been used
for applications discussed above. A typical theoretical approach to low-rank problems, including (1)
is to relax the low-rank constraint to a convex constraint, such as the trace-norm of X. Indeed, for
matrix sensing, Recht et al. [20] showed that if the measurements are noiseless and the measurement
operator $\mathcal{A}$ satisfies a restricted isometry property, then a low-rank $X^\star$ can be recovered as the unique
solution to a convex relaxation of (1). Subsequent work established similar guarantees also for the
noisy and approximate case [14, 6].
However, convex relaxations to the rank are not the common approach employed in practice. In this
and other low-rank problems, the method of choice is typically unconstrained local optimization (via
e.g. gradient descent, SGD or alternating minimization) on the factorized parametrization
$$\min_{U \in \mathbb{R}^{n \times r}} \; f(U) = \|\mathcal{A}(UU^\top) - y\|^2, \qquad (2)$$
$^1$ We study the case where $X^\star$ is PSD. We believe the techniques developed here can be used to extend results to the general case.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
where the rank constraint is enforced by limiting the dimensionality of $U$. Problem (2) is a non-convex optimization problem that could have many bad local minima (as we show in Section 5), as well as saddle points. Nevertheless, local optimization seems to work very well in practice. Working on (2) is much cheaper computationally and allows scaling to large-sized problems: the number of optimization variables is only $O(nr)$ rather than $O(n^2)$, and the updates are usually very cheap, especially compared to typical methods for solving the SDP resulting from the convex relaxation.
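As a purely illustrative sketch of this factorized approach (hypothetical names, plain NumPy; not code from the paper), the following sets up Gaussian measurements and evaluates $f(U)$ and its gradient $\nabla f(U) = 2\sum_i (\langle A_i, UU^\top\rangle - y_i)(A_i + A_i^\top)U$:

    import numpy as np

    rng = np.random.default_rng(0)
    n, r, m = 20, 2, 400                        # m = O(nr) measurements
    A = rng.normal(size=(m, n, n))              # i.i.d. Gaussian sensing matrices A_i
    U_star = rng.normal(size=(n, r))
    X_star = U_star @ U_star.T                  # planted rank-r p.s.d. X*
    y = np.einsum('kij,ij->k', A, X_star)       # noiseless y_i = <A_i, X*>

    def f(U):
        # factorized objective f(U) = sum_i (<A_i, U U^T> - y_i)^2, as in (2)
        resid = np.einsum('kij,ij->k', A, U @ U.T) - y
        return float((resid ** 2).sum())

    def grad_f(U):
        resid = np.einsum('kij,ij->k', A, U @ U.T) - y
        S = np.einsum('k,kij->ij', resid, A)
        return 2.0 * (S + S.T) @ U

    U = rng.normal(size=(n, r))                 # only O(nr) variables to optimize
    print(f(U), np.linalg.norm(grad_f(U)))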
There is therefore a significant disconnect between the theoretically studied and analyzed methods
(based on convex relaxations) and the methods actually used in practice.
Recent attempts at bridging this gap showed that, some form of global ?initialization?, typically
relying on singular value decomposition, yields a solution that is already close enough to X ? ; that
local optimization from this initializer gets to the global optima (or to a good enough solution). Jain
et al. [15], Keshavan [17] proved convergence for alternating minimization algorithm provided the
starting point is close to the optimum, while Zheng and Lafferty [30], Zhao et al. [29], Tu et al.
[26], Chen and Wainwright [8], Bhojanapalli et al. [2] considered gradient descent methods on the
factor space and proved local convergence. But all these studies rely on global initialization followed
by local convergence, and do not tackle the question of the existence of spurious local minima or deal
with optimization starting from random initialization. There is therefore still a disconnect between
this theory and the empirical practice of starting from random initialization and relying only on the
local search to find the global optimum.
In this paper we show that, under a suitable incoherence condition on the measurement operator A
(defined in Section 2), with noiseless measurements and with $\mathrm{rank}(X^\star) \le r$, the problem (2) has no spurious local minima (i.e. all local minima are global and satisfy $X^\star = UU^\top$). Furthermore, under the same conditions, all saddle points have a direction with significant negative curvature, and so using a recent result of Ge et al. [10] we can establish that stochastic gradient descent from random initialization converges to $X^\star$ in a polynomial number of iterations. We extend the results also to the noisy and approximately-low-rank settings, where we can guarantee that every local minimum is close to a global minimum. The incoherence condition we require is weaker than conditions used
to establish recovery through local search, and so our results also ensure recovery in polynomial
time under milder conditions than what was previously known. In particular, with i.i.d. Gaussian
measurements, we ensure no spurious local minima and recovery through local search with the
optimal number O(nr) of measurements.
Related Work Our work is heavily inspired by Bandeira et al. [1], who recently showed similar
behavior for the problem of community detection?this corresponds to a specific rank-1 problem with
a linear objective, elliptope constraints and a binary solution. Here we take their ideas, extend them
and apply them to matrix sensing with general rank-r matrices. In the past several months, similar
types of results were also obtained for other non-convex problems (where the source of non-convexity
is not a rank constraint), specifically complete dictionary learning [24] and phase recovery [25]. A
related recent result of a somewhat different nature pertains to rank unconstrained linear optimization
on the elliptope, showing that local minima of the rank-constrained problem approximate well the
global optimum of the rank unconstrained convex problem, even though they might not be the global
minima (in fact, the approximation guarantee for the actual global optimum is better) [18].
Another non-convex low-rank problem long known to not possess spurious local minima is the
PCA problem, which can also be phrased as matrix approximation with full observations, namely
$\min_{\mathrm{rank}(X) \le r} \|A - X\|_F$ (e.g. [23]). Indeed, local search methods such as the power-method
are routinely used for this problem. Recently local optimization methods for the PCA problem
working more directly on the optimized formulation have also been studied, including SGD [22]
and Grassmannian optimization [28]. These results are somewhat orthogonal to ours, as they study
a setting in which it is well known there are never any spurious local minima, and the challenge is
obtaining satisfying convergence rates.
The seminal work of Burer and Monteiro [3] proposed low-rank factorized optimization for SDPs, and showed that for extremely high rank $r > \sqrt{m}$ (number of constraints), an Augmented Lagrangian
method converges asymptotically to the optimum. It was also shown that (under mild conditions)
any rank deficient local minimum is a global minimum [4, 16], providing a post-hoc verifiable sufficient
condition for global optimality. However, this does not establish any a-priori condition, based on
problem structure, implying the lack of spurious local minima.
2
While preparing this manuscript, we also became aware of parallel work [11] studying the same
question for the related but different problem of matrix completion. For this problem they obtain
a similar guarantee, though with suboptimal dependence on the incoherence parameters and so
suboptimal sample complexity, and requiring adding a specific non-standard regularizer to the
objective; this is not needed for our matrix sensing results.
We believe our work, together with the parallel work of [11], is the first to establish the lack of
spurious local minima and the global convergence of local search from random initialization for a
non-trivial rank-constrained problem (beyond PCA with full observations) with rank r > 1.
Notation. For matrices $X, Y \in \mathbb{R}^{n \times n}$, their inner product is $\langle X, Y \rangle = \mathrm{trace}(X^\top Y)$. We use $\|X\|_F$, $\|X\|_2$ and $\|X\|_*$ for the Frobenius, spectral and nuclear norms of a matrix respectively. Given a matrix $X$, we use $\sigma_i(X)$ to denote the singular values of $X$ in decreasing order. $X_r = \arg\min_{\mathrm{rank}(Y) \le r} \|X - Y\|_F$ denotes the rank-$r$ approximation of $X$, as obtained via its truncated singular value decomposition. We use plain capitals $R$ and $Q$ to denote orthonormal matrices.
2 Formulation and Assumptions
We write the linear measurement operator $\mathcal{A} : \mathbb{R}^{n \times n} \to \mathbb{R}^m$ as $\mathcal{A}(X)_i = \langle A_i, X \rangle$ where $A_i \in \mathbb{R}^{n \times n}$, yielding $y_i = \langle A_i, X^\star \rangle + w_i$, $i = 1, \dots, m$. We assume $w_i \sim \mathcal{N}(0, \sigma_w^2)$ is i.i.d. Gaussian noise. We are generally interested in the high dimensional regime where the number of measurements $m$ is usually much smaller than the dimension $n^2$.
Even if we know that $\mathrm{rank}(X^\star) \le r$, having many measurements might not be sufficient for recovery if they are not "spread out" enough. E.g., if all measurements only involve the first $n/2$ rows and columns, we would never have any information on the bottom-right block. A sufficient condition for identifiability of a low-rank $X^\star$ from linear measurements by Recht et al. [20] is based on the restricted isometry property defined below.
Definition 2.1 (Restricted Isometry Property). Measurement operator $\mathcal{A} : \mathbb{R}^{n \times n} \to \mathbb{R}^m$ (with rows $A_i$, $i = 1, \dots, m$) satisfies $(r, \delta_r)$-RIP if for any $n \times n$ matrix $X$ with rank $\le r$,
$$(1 - \delta_r)\, \|X\|_F^2 \;\le\; \frac{1}{m} \sum_{i=1}^{m} \langle A_i, X \rangle^2 \;\le\; (1 + \delta_r)\, \|X\|_F^2. \qquad (3)$$
In particular, $X^\star$ of rank $r$ is identifiable if $\delta_{2r} < 1$ [see 20, Theorem 3.2]. One situation in which RIP is obtained is for random measurement operators. For example, matrices with i.i.d. $\mathcal{N}(0, 1)$ entries satisfy $(r, \delta_r)$-RIP when $m = O(nr/\delta_r^2)$ [see 6, Theorem 2.3]. This implies identifiability based on i.i.d. Gaussian measurements with $m = O(nr)$ measurements (coincidentally, the number of degrees of freedom in $X^\star$, optimal up to a constant factor).
3 Main Results
We are now ready to present our main result about local minima for the matrix sensing problem (2).
We first present the results for noisy sensing of exact low rank matrices, and then generalize the
results also to approximately low rank matrices.
Now we will present our result characterizing local minima of $f(U)$, for low-rank $X^\star$. Recall that measurements are $y = \mathcal{A}(X^\star) + w$, where entries of $w$ are i.i.d. Gaussian, $w_i \sim \mathcal{N}(0, \sigma_w^2)$.
Theorem 3.1. Consider the optimization problem (2) where $y = \mathcal{A}(X^\star) + w$, $w$ is i.i.d. $\mathcal{N}(0, \sigma_w^2)$, $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r} < \frac{1}{10}$, and $\mathrm{rank}(X^\star) \le r$. Then, with probability $1 - \frac{10}{n^2}$ (over the noise), for any local minimum $U$ of $f(U)$:
$$\|UU^\top - X^\star\|_F \;\le\; 20\, \sqrt{\frac{\log(n)}{m}}\; \sigma_w.$$
In particular, in the noiseless case ($\sigma_w = 0$) we have $UU^\top = X^\star$ and so $f(U) = 0$ and every local minimum is global. In the noiseless case, we can also relax the RIP requirement to $\delta_{4r} < 1/5$ (see Theorem 4.1 in Section 4). In the noisy case we cannot expect to ensure we always get to an exact global minimum, since the noise might cause tiny fluctuations very close to the global minima, possibly creating multiple very close local minima. But we show that all local minima are indeed very close to some factorization $U^\star U^{\star\top} = X^\star$ of the true signal, and hence to a global optimum, and this "radius" of local minima decreases as we have more observations.
The proof of the Theorem for the noiseless case is presented in Section 4. The proof for the general
setting follows along the same lines and can be found in the Appendix.
So far we have discussed how all local minima are global, or at least very close to a global minimum.
Using a recent result by Ge et al. [10] on the convergence of SGD for non-convex functions, we
can further obtain a polynomial bound on the number of SGD iterations required to reach the global
minima. The main condition that needs to be established in order to ensure this, is that all saddle
points of (2) satisfy the "strict saddle point condition", i.e. have a direction with significant negative
curvature:
Theorem 3.2 (Strict saddle). Consider the optimization problem (2) in the noiseless case, where $y = \mathcal{A}(X^\star)$, $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r} < \frac{1}{10}$, and $\mathrm{rank}(X^\star) \le r$. Let $U$ be a first order critical point of $f(U)$ with $UU^\top \ne X^\star$. Then the smallest eigenvalue of the Hessian satisfies
$$\lambda_{\min}\!\left( \frac{1}{m}\, \nabla^2 f(U) \right) \;\le\; -\, \frac{2}{5}\; \sigma_r(X^\star).$$
Now consider the stochastic gradient descent updates,
$$U^{+} \;=\; \mathrm{Proj}_b\!\left( U - \eta \left( \sum_{i=1}^{m} \left( \left\langle A_i, UU^\top \right\rangle - y_i \right) A_i U \;+\; \xi \right) \right), \qquad (4)$$
where $\xi$ is uniformly distributed on the unit sphere and $\mathrm{Proj}_b$ is a projection onto $\{U : \|U\|_F \le b\}$. Using
Theorem 3.2 and the result of Ge et al. [10] we can establish:
Theorem 3.3 (Convergence from random initialization). Consider the optimization problem (2) under the same noiseless conditions as in Theorem 3.2. Using $b \ge \|U^\star\|_F$ for some global optimum $U^\star$ of $f(U)$, for any $\epsilon, c > 0$, after $T = \mathrm{poly}\!\left( \frac{1}{\sigma_r(X^\star)},\; \sigma_1(X^\star),\; b,\; \frac{1}{\epsilon},\; \log(1/c) \right)$ iterations of (4) with an appropriate stepsize $\eta$, starting from a random point uniformly distributed on $\|U\|_F = b$, with probability at least $1 - c$, we reach an iterate $U_T$ satisfying
$$\|U_T - U^\star\|_F \le \epsilon.$$
The above result guarantees convergence of noisy gradient descent to a global optimum. Alternatively,
second order methods such as cubic regularization (Nesterov and Polyak [19]) and trust region (Cartis
et al. [7]) that have guarantees based on the strict saddle point property can also be used here.
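A minimal sketch of update (4), under the reconstruction above (the stepsize and noise symbols were not fully legible in the source, so $\eta$ and $\xi$ are our own notation; all code names are hypothetical):

    import numpy as np

    def proj_b(U, b):
        # projection onto the Frobenius ball {U : ||U||_F <= b}
        nrm = np.linalg.norm(U)
        return U if nrm <= b else U * (b / nrm)

    def noisy_sgd_step(U, A, y, eta, b, rng):
        # gradient term sum_i (<A_i, U U^T> - y_i) A_i U, plus spherical noise
        resid = np.einsum('kij,ij->k', A, U @ U.T) - y
        G = np.einsum('k,kij->ij', resid, A) @ U
        xi = rng.normal(size=U.shape)
        xi /= np.linalg.norm(xi)                # uniform direction on the unit sphere
        return proj_b(U - eta * (G + xi), b)

    rng = np.random.default_rng(0)
    n, r, m = 10, 2, 80
    A = rng.normal(size=(m, n, n))
    U = rng.normal(size=(n, r))
    y = rng.normal(size=m)                      # placeholder measurements
    U = noisy_sgd_step(U, A, y, eta=1e-4, b=10.0, rng=rng)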
RIP Requirement: Our results require (4r, 1/10)-RIP for the noisy case and (4r, 1/5)-RIP for the
noiseless case. Requiring $(2r, \delta_{2r})$-RIP with $\delta_{2r} < 1$ is sufficient to ensure uniqueness of the global optimum of (1), and thus recovery in the noiseless setting [20], but all known efficient recovery methods require stricter conditions. The best guarantees we are aware of require (5r, 1/10)-RIP [20] or (4r, 0.414)-RIP [6] using a convex relaxation. Alternatively, (6r, 1/10)-RIP is required for global initialization followed by non-convex optimization [26]. In terms of requirements on $(2r, \delta_{2r})$-RIP for non-convex methods, the best we are aware of is requiring $\delta_{2r} < O(1/r)$ [15, 29, 30]; this is a much stronger condition than ours, and it yields a suboptimal required number of spherical Gaussian measurements of $\Omega(nr^3)$. So, compared to prior work our requirement is very mild: it ensures efficient recovery, and requires the optimal number of spherical Gaussian measurements (up to a constant factor) of $O(nr)$.
Extension to Approximate Low Rank. We can also obtain similar results that deteriorate gracefully if $X^\star$ is not exactly low rank, but is close to being low-rank (see proof in the Appendix):
Theorem 3.4. Consider the optimization problem (2) where $y = \mathcal{A}(X^\star)$ and $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r} < \frac{1}{100}$. Then, for any local minimum $U$ of $f(U)$:
$$\|UU^\top - X^\star\|_F \;\le\; 4\left( \|X^\star - X^\star_r\|_F \;+\; \delta_{2r}\, \|X^\star - X^\star_r\|_* \right),$$
where $X^\star_r$ is the best rank $r$ approximation of $X^\star$.
This theorem guarantees that any local optimum of $f(U)$ is close to $X^\star$ up to an error depending on $\|X^\star - X^\star_r\|$. For the low-rank noiseless case we have $X^\star = X^\star_r$ and the right hand side vanishes. When $X^\star$ is not exactly low rank, the best recovery error we can hope for is $\|X^\star - X^\star_r\|_F$, since $UU^\top$ is at most rank $r$. On the right hand side of Theorem 3.4, we have also a nuclear norm term, which might be higher, but it also gets scaled down by $\delta_{2r}$, and so by the number of measurements.
[Figure 1 (plots omitted): success-probability maps of Rank vs. m/n for Random (left) and SVD (center) initialization, and the resulting phase-transition curve (right).]
Figure 1: The plots in this figure compare the success probability of gradient descent between (left) random and (center) SVD initialization (suggested in [15]), for problem (2), with increasing number of samples $m$ and various values of rank $r$. Right most plot is the first $m$ for a given $r$ where the probability of success reaches the value 0.5. A run is considered a success if $\|UU^\top - X^\star\|_F / \|X^\star\|_F \le 10^{-2}$. White cells denote success and black cells denote failure of recovery. We set $n$ to be 100. Measurements $y_i$ are inner products of an entrywise i.i.d. Gaussian matrix and a rank-$r$ p.s.d. matrix with random subspace. We notice no significant difference between the two initialization methods, suggesting absence of local minima as shown. Both methods have phase transition around $m = 2 \cdot n \cdot r$.
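A toy reproduction of this experiment can be sketched in a few lines (assumptions: symmetrized Gaussian $A_i$, an ad hoc step size and iteration budget, and a simple spectral initialization from $\frac{1}{m}\sum_i y_i A_i$ standing in for the SVD initialization of [15]; outcomes depend on these choices):

    import numpy as np

    def recover(U0, A, y, steps=3000, lr=2e-5):
        # plain gradient descent on f(U) = sum_i (<A_i, U U^T> - y_i)^2
        U = U0.copy()
        for _ in range(steps):
            resid = np.einsum('kij,ij->k', A, U @ U.T) - y
            S = np.einsum('k,kij->ij', resid, A)
            U -= lr * 2.0 * (S + S.T) @ U
        return U

    rng = np.random.default_rng(1)
    n, r = 15, 2
    m = 2 * n * r                               # near the observed phase transition
    B = rng.normal(size=(m, n, n))
    A = 0.5 * (B + B.transpose(0, 2, 1))        # symmetrized Gaussian measurements
    U_star = rng.normal(size=(n, r))
    X_star = U_star @ U_star.T
    y = np.einsum('kij,ij->k', A, X_star)

    U_rand = rng.normal(size=(n, r))            # random initialization
    w, V = np.linalg.eigh(np.einsum('k,kij->ij', y, A) / m)
    U_svd = V[:, -r:] * np.sqrt(np.abs(w[-r:])) # spectral ("SVD") initialization

    for name, U0 in [("random", U_rand), ("svd", U_svd)]:
        U = recover(U0, A, y)
        err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
        print(name, "relative error:", err)     # Figure 1 counts err <= 1e-2 a success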
4 Proof for the Noiseless Case
In this section we present the proof characterizing the local minima of problem (2). For ease of
exposition we first present the results for the noiseless case (w = 0). Proof for the general case can
be found in the Appendix.
Theorem 4.1. Consider the optimization problem (2) where $y = \mathcal{A}(X^\star)$, $\mathcal{A}$ satisfies $(4r, \delta_{4r})$-RIP with $\delta_{4r} < \frac{1}{5}$, and $\mathrm{rank}(X^\star) \le r$. Then, for any local minimum $U$ of $f(U)$:
$$UU^\top = X^\star.$$
For the proof of this theorem we first discuss the implications of the first and second order optimality conditions and then show how to combine them to yield the result.
Invariance of $f(U)$ over $r \times r$ orthonormal matrices introduces additional challenges in comparing a given stationary point to a global optimum. We have to find the best orthonormal matrix $R$ to align a given stationary point $U$ to a global optimum $U^\star$, where $U^\star U^{\star\top} = X^\star$, to combine results from the first and second order conditions, without degrading the isometry constants.
Consider a local optimum $U$ that satisfies first and second order optimality conditions of problem (2). In particular $U$ satisfies $\nabla f(U) = 0$ and $z^\top \nabla^2 f(U)\, z \ge 0$ for any $z \in \mathbb{R}^{nr}$. Now we will see how these two conditions constrain the error $UU^\top - U^\star U^{\star\top}$.
First we present the following consequence of the RIP assumption [see 5, Lemma 2.1].
Lemma 4.1. Given two $n \times n$ rank-$r$ matrices $X$ and $Y$, and a $(4r, \delta)$-RIP measurement operator $\mathcal{A}$, the following holds:
$$\left| \frac{1}{m} \sum_{i=1}^{m} \langle A_i, X \rangle\, \langle A_i, Y \rangle \;-\; \langle X, Y \rangle \right| \;\le\; \delta\, \|X\|_F\, \|Y\|_F. \qquad (5)$$
4.1 First order optimality
First we will consider the first order condition, $\nabla f(U) = 0$. For any stationary point $U$ this implies
$$\sum_{i} \left\langle A_i,\; UU^\top - U^\star U^{\star\top} \right\rangle A_i U \;=\; 0. \qquad (6)$$
Now using the isometry property of $A_i$ gives us the following result.
Lemma 4.2. [First order condition] For any first order stationary point $U$ of $f(U)$, and $\mathcal{A}$ satisfying the $(4r, \delta)$-RIP (3), the following holds:
$$\left\| \left( UU^\top - U^\star U^{\star\top} \right) QQ^\top \right\|_F \;\le\; \delta\, \left\| UU^\top - U^\star U^{\star\top} \right\|_F,$$
where $Q$ is an orthonormal matrix that spans the column space of $U$.
This lemma states that any stationary point of $f(U)$ is close to a global optimum $U^\star$ in the subspace spanned by columns of $U$. Notice that the error along the orthogonal subspace $Q_\perp$, $\|X^\star Q_\perp Q_\perp^\top\|_F$, can still be large, making the distance between $X$ and $X^\star$ arbitrarily far.
Proof of Lemma 4.2. Let $U = QR$, for some orthonormal $Q$. Consider any matrix of the form $ZQR^{\dagger}$.$^2$ The first order optimality condition then implies,
$$\sum_{i=1}^{m} \left\langle A_i,\; UU^\top - U^\star U^{\star\top} \right\rangle \cdot \left\langle A_i,\; U R^{\dagger} Q^\top Z^\top \right\rangle = 0.$$
The above equation together with the Restricted Isometry Property (equation (5)) gives us the following inequality:
$$\left\langle UU^\top - U^\star U^{\star\top},\; QQ^\top Z^\top \right\rangle \;\le\; \delta\, \left\| UU^\top - U^\star U^{\star\top} \right\|_F \left\| QQ^\top Z^\top \right\|_F.$$
Note that for any matrix $A$, $\langle A, QQ^\top Z^\top \rangle = \langle QQ^\top A, Z^\top \rangle$. Furthermore, for any matrix $A$, $\sup_{\{Z : \|Z\|_F \le 1\}} \langle A, Z \rangle = \|A\|_F$. Hence the above inequality implies the lemma statement.
4.2 Second order optimality
We now consider the second order condition to show that the error along $Q_\perp Q_\perp^\top$ is indeed bounded well. Let $\nabla^2 f(U)$ be the Hessian of the objective function. Note that this is an $nr \times nr$ matrix. Fortunately for our result we need to only evaluate the Hessian along $\mathrm{vec}(U - U^\star R)$ for some orthonormal matrix $R$. Here $\mathrm{vec}(\cdot)$ denotes writing a matrix in vector form.
Lemma 4.3. [Hessian computation] Let $U$ be a first order critical point of $f(U)$. Then for any $r \times r$ orthonormal matrix $R$ and $\Delta_j = \Delta e_j e_j^\top$ ($\Delta = U - U^\star R$),
$$\sum_{j=1}^{r} \mathrm{vec}(\Delta_j)^\top \left[ \nabla^2 f(U) \right] \mathrm{vec}(\Delta_j) \;=\; \sum_{i=1}^{m} \sum_{j=1}^{r} 4 \left\langle A_i,\; U \Delta_j^\top \right\rangle^2 \;-\; 2 \sum_{i=1}^{m} \left\langle A_i,\; UU^\top - U^\star U^{\star\top} \right\rangle^2,$$
Hence from second order optimality of $U$ we get,
Corollary 4.1. [Second order optimality] Let $U$ be a local minimum of $f(U)$. For any $r \times r$ orthonormal matrix $R$,
$$\sum_{j=1}^{r} \sum_{i=1}^{m} 4 \left\langle A_i,\; U \Delta_j^\top \right\rangle^2 \;\ge\; \frac{1}{2} \sum_{i=1}^{m} \left\langle A_i,\; UU^\top - U^\star U^{\star\top} \right\rangle^2, \qquad (7)$$
Further for $\mathcal{A}$ satisfying $(2r, \delta)$-RIP (equation (3)) we have,
$$\sum_{j=1}^{r} \left\| U e_j e_j^\top (U - U^\star R)^\top \right\|_F^2 \;\ge\; \frac{1}{2(1+\delta)} \left\| UU^\top - U^\star U^{\star\top} \right\|_F^2. \qquad (8)$$
The proof of this result follows simply by applying Lemma 4.3. The above corollary gives a bound
on the distance in the factor (U) space, ‖U (U − U* R)^T‖_F². To be able to compare the second
order condition to the first order condition we need a relation between ‖U (U − U* R)^T‖_F² and
‖X − X*‖_F². Towards this we show the following result.
Lemma 4.4. Let U and U* be two n × r matrices, and let Q be an orthonormal matrix that spans the
column space of U. Then there exists an r × r orthonormal matrix R such that for any first order
stationary point U of f(U), the following holds:
    Σ_{j=1}^r ‖U e_j e_j^T (U − U* R)^T‖_F² ≤ (1/8) ‖U U^T − U* U*^T‖_F² + (34/8) ‖(U U^T − U* U*^T) Q Q^T‖_F².
This lemma bounds the distance in the factor space, Σ_j ‖(U − U* R) e_j e_j^T U^T‖_F², by
‖U U^T − U* U*^T‖_F² and ‖(U U^T − U* U*^T) Q Q^T‖_F². Combining this with the result from second order
optimality (Corollary 4.1) shows that ‖U U^T − U* U*^T‖_F² is bounded by a constant factor of
‖(U U^T − U* U*^T) Q Q^T‖_F². This implies that ‖X* Q_⊥ Q_⊥^T‖_F is bounded, complementing the bound along
Q Q^T that the first order condition gave (Lemma 4.2). The proof of the above lemma is in Section B.
Hence from the above optimality conditions we get the proof of Theorem 4.1.
Proof of Theorem 4.1. Assume U U^T ≠ U* U*^T. From Lemmas 4.2 and 4.4 and Corollary 4.1 we get
    ( 1/(2(1+δ)) − 1/8 − (34/8) δ² ) ‖U U^T − U* U*^T‖_F² ≤ 0.
If δ < 1/5, the above inequality holds only if U U^T = U* U*^T.

5 Necessity of RIP
We showed that there are no spurious local minima only under a restricted isometry assumption.
A natural question is whether this is necessary, or whether perhaps the problem (2) never has any
spurious local minima, perhaps similarly to the non-convex PCA problem min_U ‖A − U U^T‖_F².
A good indication that this is not the case is that (2) is NP-hard, even in the noiseless case when
y = A(X*) for rank(X*) ≤ k [20] (if we don't require RIP, we can have each A_i be non-zero on
a single entry, in which case (2) becomes a matrix completion problem, for which hardness has been
shown even under fairly favorable conditions [13]; note, though, that matrix completion is tractable
under incoherence assumptions, similar to RIP). That is, we are unlikely to have a poly-time
algorithm that succeeds for any linear measurement operator. Although this does not formally preclude
the possibility that there are no spurious local minima but it simply takes a very long time to find a
local minimum, this scenario seems somewhat unlikely.
To resolve the question, we present an explicit example of a measurement operator A and y = A(X*)
(i.e. f(X*) = 0), with rank(X*) = r, for which (1), and so also (2), have a non-global local minimum.
Example 1: Let f(X) = (X_11 + X_22 − 1)² + (X_11 − 1)² + X_12² + X_21² and consider (1) with r = 1
(i.e. a rank-1 constraint). For X* = [1 0; 0 0] we have f(X*) = 0 and rank(X*) = 1. But X = [0 0; 0 1]
is a rank-1 local minimum with f(X) = 1.
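For concreteness, a small numerical sketch (ours, not from the analysis) can check Example 1 in the factored parametrization X = u u^T: the point u = (0, 1) satisfies the first- and second-order conditions used in Section 4, namely a vanishing gradient and a positive semi-definite Hessian, while f = 1 > 0 = f(X*).

    import numpy as np

    # Check of Example 1 over X = u u^T with f as reconstructed above.
    def f(u):
        x11, x22, x12 = u[0] ** 2, u[1] ** 2, u[0] * u[1]
        return (x11 + x22 - 1) ** 2 + (x11 - 1) ** 2 + 2 * x12 ** 2

    def grad_hess(fun, u, h=1e-5):
        # Central finite differences for the gradient and Hessian.
        n = len(u)
        g, H = np.zeros(n), np.zeros((n, n))
        for i in range(n):
            e = np.zeros(n); e[i] = h
            g[i] = (fun(u + e) - fun(u - e)) / (2 * h)
            for j in range(n):
                d = np.zeros(n); d[j] = h
                H[i, j] = (fun(u + e + d) - fun(u + e - d)
                           - fun(u - e + d) + fun(u - e - d)) / (4 * h * h)
        return g, H

    print(f(np.array([1.0, 0.0])), f(np.array([0.0, 1.0])))  # 0.0 and 1.0
    g, H = grad_hess(f, np.array([0.0, 1.0]))
    print(np.round(g, 6))                       # ~[0, 0]: stationary point
    print(np.round(np.linalg.eigvalsh(H), 4))   # ~[0, 8]: PSD Hessian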
The construction can be extended to any rank r by simply adding Σ_{i=3}^{r+2} (X_ii − 1)² to the objective,
and padding both the global and local minimum with a diagonal beneath the leading 2 × 2 block.
In Example 1, we had a rank-r problem, with a rank-r exact solution, and a rank-r local minimum.
Another question we can ask is what happens if we allow a larger rank than the rank of the optimal
solution. That is, if we have f(X*) = 0 with low rank(X*), even rank(X*) = 1, but consider (1)
or (2) with a high r. Could we still have non-global local minima? The answer is yes...
Example 2: Let f(X) = (X_11 + X_22 + X_33 − 1)² + (X_11 − 1)² + (X_22 − X_33)² + Σ_{i,j : i≠j} X_ij²
and consider the problem (1) with a rank r = 2 constraint. We can verify that
X* = [1 0 0; 0 0 0; 0 0 0] is a rank-1 global minimum with f(X*) = 0, but X = [0 0 0; 0 1/2 0; 0 0 1/2]
is a local minimum with f(X) = 1. Also, for an arbitrarily large rank constraint r > 1 (taking r to be
odd for simplicity), extend the objective to
    f(X) = (X_11 − 1)² + Σ_{i=1}^{(r−1)/2} [ (X_11 + X_{2i,2i} + X_{2i+1,2i+1} − 1)² + (X_{2i,2i} − X_{2i+1,2i+1})² ].
We still have a rank-1 global minimum X* with a single non-zero entry X*_11 = 1, while X = (I − X*)/2
is a local minimum with f(X) = 1.
6 Conclusion
We established that under conditions similar to those required for convex relaxation recovery guarantees,
the non-convex formulation of matrix sensing (2) does not exhibit any spurious local minima (or,
in the noisy and approximate settings, at least none outside a small radius around the global minima),
and we can obtain theoretical guarantees on the success of optimizing it using SGD from random
initialization. This matches the methods frequently used in practice, and can explain their success.
This guarantee is very different in nature from other recent work on non-convex optimization for
low-rank problems, which relied heavily on initialization to get close to the global optimum, and on
local search only for the final local convergence to the global optimum. We believe this is the first
result, together with the parallel work of Ge et al. [11], on the global convergence of local search for
common rank-constrained problems that are worst-case hard.
Our result suggests that SVD initialization is not necessary for global convergence, and random
initialization would succeed under similar conditions (in fact, our conditions are even weaker than in
previous work that used SVD initialization). To investigate empirically whether SVD initialization
is indeed helpful for ensuring global convergence, in Figure 1 we compare the recovery probability
of random rank-k matrices for random and SVD initialization: there is no significant difference
between the two.
Beyond the implications for matrix sensing, we hope these types of results could be a first step
and serve as a model for understanding local search in deep networks. Matrix factorization, such as
in (2), is a depth-two neural network with linear transfer: an extremely simple network, but already
non-convex and arguably the most complicated network we have a good theoretical understanding of.
Deep networks are also hard to optimize in the worst case, but local search seems to do very well
in practice. Our ultimate goal is to use the study of matrix recovery as a guide in understanding the
conditions that enable efficient training of deep networks.
Acknowledgements
The authors would like to thank Afonso Bandeira for discussions, and Jason Lee and Tengyu Ma for
sharing and discussing their work. This research was supported in part by NSF RI/AF grant 1302662.
References
[1] A. S. Bandeira, N. Boumal, and V. Voroninski. On the low-rank approach for semidefinite programs arising
in synchronization and community detection. arXiv preprint arXiv:1602.04426, 2016.
[2] S. Bhojanapalli, A. Kyrillidis, and S. Sanghavi. Dropping convexity for faster semi-definite optimization.
arXiv preprint arXiv:1509.03917, 2015.
[3] S. Burer and R. D. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via
low-rank factorization. Mathematical Programming, 95(2):329-357, 2003.
[4] S. Burer and R. D. Monteiro. Local minima and convergence in low-rank semidefinite programming.
Mathematical Programming, 103(3):427-444, 2005.
[5] E. J. Candès. The restricted isometry property and its implications for compressed sensing. Comptes
Rendus Mathematique, 346(9):589-592, 2008.
[6] E. J. Candès and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of
noisy random measurements. Information Theory, IEEE Transactions on, 57(4):2342-2359, 2011.
[7] C. Cartis, N. I. Gould, and P. L. Toint. Complexity bounds for second-order optimality in unconstrained
optimization. Journal of Complexity, 28(1):93-108, 2012.
[8] Y. Chen and M. J. Wainwright. Fast low-rank estimation by projected gradient descent: General statistical
and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[9] S. Flammia, D. Gross, Y.-K. Liu, and J. Eisert. Quantum tomography via compressed sensing: Error
bounds, sample complexity and efficient estimators. New Journal of Physics, 14(9):095022, 2012.
[10] R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points: online stochastic gradient for tensor
decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797-842, 2015.
[11] R. Ge, J. Lee, and T. Ma. Matrix completion has no spurious local minimum. arXiv preprint
arXiv:1605.07272, 2016.
[12] D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert. Quantum state tomography via compressed
sensing. Physical Review Letters, 105(15):150401, 2010.
[13] M. Hardt, R. Meka, P. Raghavendra, and B. Weitz. Computational limits for matrix completion. In
Proceedings of The 27th Conference on Learning Theory, pages 703-725, 2014.
[14] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In
Advances in Neural Information Processing Systems, pages 937-945, 2010.
[15] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In
Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pages 665-674. ACM, 2013.
[16] M. Journée, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive
semidefinite matrices. SIAM Journal on Optimization, 20(5):2327-2351, 2010.
[17] R. H. Keshavan. Efficient algorithms for collaborative filtering. PhD thesis, Stanford University, 2012.
[18] A. Montanari. A Grothendieck-type inequality for local maxima. arXiv preprint arXiv:1603.04064, 2016.
[19] Y. Nesterov and B. T. Polyak. Cubic regularization of Newton method and its global performance.
Mathematical Programming, 108(1):177-205, 2006.
[20] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via
nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[21] J. D. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In
Proceedings of the 22nd International Conference on Machine Learning, pages 713-719. ACM, 2005.
[22] C. D. Sa, C. Re, and K. Olukotun. Global convergence of stochastic gradient descent for some non-convex
matrix problems. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15),
pages 2332-2341, 2015.
[23] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In Proceedings of the 20th International
Conference on Machine Learning (ICML-03), pages 720-727, 2003.
[24] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery using nonconvex optimization. In Proceedings
of The 32nd International Conference on Machine Learning, pages 2351-2360, 2015.
[25] J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. arXiv preprint arXiv:1602.06664, 2016.
[26] S. Tu, R. Boczar, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear matrix equations via
Procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
[27] H.-F. Yu, P. Jain, P. Kar, and I. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings
of The 31st International Conference on Machine Learning, pages 593-601, 2014.
[28] D. Zhang and L. Balzano. Global convergence of a Grassmannian gradient descent algorithm for subspace
estimation. arXiv preprint arXiv:1506.07405, 2015.
[29] T. Zhao, Z. Wang, and H. Liu. A nonconvex optimization framework for low rank matrix estimation. In
Advances in Neural Information Processing Systems, pages 559-567, 2015.
[30] Q. Zheng and J. Lafferty. A convergent gradient descent algorithm for rank minimization and semidefinite
programming from random linear measurements. In Advances in Neural Information Processing Systems,
pages 109-117, 2015.
5,827 | 6,272 | Preference Completion from Partial Rankings
Suriya Gunasekar
University of Texas, Austin, TX, USA
[email protected]
Oluwasanmi Koyejo
University of Illinois, Urbana-Champaign, IL, USA
[email protected]
Joydeep Ghosh
University of Texas,Austin, TX, USA
[email protected]
Abstract
We propose a novel and efficient algorithm for the collaborative preference completion problem, which involves jointly estimating individualized rankings for a set of
entities over a shared set of items, based on a limited number of observed affinity
values. Our approach exploits the observation that while preferences are often
recorded as numerical scores, the predictive quantity of interest is the underlying
rankings. Thus, attempts to closely match the recorded scores may lead to overfitting and impair generalization performance. Instead, we propose an estimator
that directly fits the underlying preference order, combined with nuclear norm
constraints to encourage low?rank parameters. Besides (approximate) correctness
of the ranking order, the proposed estimator makes no generative assumption on
the numerical scores of the observations. One consequence is that the proposed
estimator can fit any consistent partial ranking over a subset of the items represented as a directed acyclic graph (DAG), generalizing standard techniques that
can only fit preference scores. Despite this generality, for supervision representing
total or blockwise total orders, the computational complexity of our algorithm is
within a log factor of the standard algorithms for nuclear norm regularization based
estimates for matrix completion. We further show promising empirical results for a
novel and challenging application: collaboratively ranking the associations
between brain regions and cognitive neuroscience terms.
1 Introduction
Collaborative preference completion is the task of jointly learning bipartite (or dyadic) preferences of a
set of entities for a shared list of items, e.g., user-item interactions in a recommender system [14; 22].
It is commonly assumed that such entity-item preferences are generated from a small number of
low rank. Further, if the observed affinity scores from various explicit and implicit feedback are
treated as exact (or mildly perturbed) entries of the unobserved preference value matrix, then the
preference completion task naturally fits in the framework of low rank matrix completion [22; 38].
More generally, low rank matrix completion involves predicting the missing entries of a low rank
matrix from a vanishing fraction of its entries observed through a noisy channel. Several low rank
matrix completion estimators and algorithms have been developed in the literature, many with strong
theoretical guarantees and empirical performance [6; 32; 21; 28; 38; 10].
Recent research in the preference completion literature have noted that using a matrix completion
estimator for collaborative preference estimation may be misguided [11; 33; 23] as the observed
entity?item affinity scores from implicit/explicit feedback are potentially subject to systematic
monotonic transformations arising from limitations in feedback collection, e.g., quantization and
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
inherent biases. While simple user biases and linear transofmations can be handled within a low
rank matrix framework, more complex transformations like quantization can potentially increase
the rank of the observed preference score matrix significantly, thus adversely affecting recovery
using standard low rank matrix completion [13]. Further, despite the common practice of measuring
preferences using numerical scores, predictions are most often deployed or evaluated based on
the item ranking e.g. in recommender systems, user recommendations are often presented as a
ranked list of items without the underlying scores. Indeed several authors have shown that favorable
empirical/theoretical performance in mean square error for the preference matrix often does not
translate to better performance when performance is measured using ranking metrics [11; 33; 23].
Thus, collaborative preference estimation may be better posed as a collection of coupled learning
to rank (LETOR) problems [25], where we seek to jointly learn the preference rankings of a set of
entities, by exploiting the low dimensional latent structure of the underlying preference values.
This paper considers preference completion in a general collaborative LETOR setting. Importantly,
while the observations are assumed to be reliable indicators for relative preference ranking, their
numerical scores may deviate substantially from the ground truth low rank preference matrix. Therefore,
we aim at addressing preference completion under the following generalizations:
1. In a simple setting, for each entity, a score vector representing its observed affinity interactions
is assumed to be generated from an arbitrary monotonic transformation of the corresponding
entries of the ground truth preference matrix. We make no further generative assumptions on
observed scores beyond monotonicity with respect to the underlying low rank preference matrix.
2. We also consider a more general setting, where observed preferences of each entity represent
specifications of a partial ranking in the form of a directed acyclic graph (DAG) ? the nodes
represent a subset of items, and each edge represents a strict ordering between a pair of nodes.
Such rankings may be encountered when the preference scores are consolidated from multiple
sources of feedback, e.g., comparative feedback (pairwise or listwise) solicited for independent
subsets of items. This generalized setting cannot be handled by standard matrix completion
without some way of transforming the DAG orderings into a score vector.
Our work is in part motivated by an application to neuroimaging meta-analysis as outlined in the
following. Cognitive neuroscience aims to quantify the link between brain function with behavior.
This interaction is most often measured in humans using Functional Magnetic Resonance Imaging
(fMRI) experiments that measure brain activity in response to behavioral tasks. After analysis,
the conclusions are often summarized in neuroscience publications which include a table of brain
locations that are most strongly activated in response to an experimental stimulus. These results
can then be synthesized using meta-analysis techniques to derive accurate predictions of brain
activity associated with cognitive terms (also known as forward inference) and prediction of cognitive
terms associated with brain regions (also known as reverse inference). For our study, we used data
from neurosynth [36] - a public repository1 which automatically scrapes information on published
associations between brain regions and terms in cognitive neuroscience experiments.
The key contributions of the paper are summarized below.
• We propose a convex estimator for low rank preference completion using limited supervision,
addressing: (a) arbitrary monotonic transformations of preference scores; and (b) partial rankings
over items (Section 3.1). We derive generalization error bounds for a surrogate ranking loss that
quantifies the trade-off between data fit and regularization (Section 5).
• We propose efficient algorithms for the estimate under totally and partially ordered observations.
In the case of total orders, in spite of the increased generality, the computational complexity of our
algorithm is within a log factor of the standard convex algorithms for matrix completion (Section 4).
• The proposed algorithm is evaluated on a novel application: identifying associations between
brain regions and cognitive terms from the neurosynth dataset [37] (Section 6). Such a large scale
meta-analysis, synthesizing information from the literature and related tasks, has the potential to
lead to novel insights into the role of brain regions in cognition and behavior.
1.1 Notation
For a matrix M ∈ R^{d1×d2}, let σ1 ≥ σ2 ≥ … be the singular values of M. Then the nuclear
norm is ‖M‖_* = Σ_i σ_i, the operator norm is ‖M‖_op = σ1, and the Frobenius norm is
‖M‖_F = (Σ_i σ_i²)^{1/2}. Let
¹ http://neurosynth.org/
[N] = {1, 2, …, N}. A vector or a set x indexed by j ∈ [N] is sometimes denoted as (x_j)_{j=1}^N or
simply (x_j) whenever N is unambiguous. Let Ω ⊆ [d1] × [d2] denote a subset of indices of a matrix
in R^{d1×d2}. For j ∈ [d2], let Ω_j = {(i′, j′) ∈ Ω : j′ = j} ⊆ Ω denote the subset of entries of Ω
from the j-th column. Given Ω = {(i_s, j_s) : s = 1, 2, …, |Ω|}, P_Ω : X ↦ (X_{i_s j_s})_{s=1}^{|Ω|} ∈ R^{|Ω|} is the
linear subsampling operator, and P_Ω^* : R^{|Ω|} → R^{d1×d2} is its adjoint, i.e. ⟨y, P_Ω(X)⟩ = ⟨X, P_Ω^*(y)⟩.
For conciseness, we sometimes use the notation X_Ω to denote P_Ω(X).
2 Related Work
Matrix Completion: Low rank matrix completion has an extensive literature; a few examples
include [22; 6; 21; 28] among several others. However, the bulk of these works including those in the
context of ranking/recommendation applications focus on (a) fitting the observed numerical scores
using squared loss, and (b) evaluating the results on parameter/rating recovery metrics such as root
mean squared error (RMSE). The shortcomings of such estimators and results using squared loss in
ranking applications have been studied in some recent research [12; 11]. Motivated by collaborative
ranking applications, there has been growing interest in addressing matrix completion within an
explicit LETOR framework. Weimer et al. [35] and Koyejo et al. [23] propose estimators that involve
non?convex optimization problems and their algorithmic convergence and generalization behavior are
not well understood. Some recent works provide parameter recovery guarantees for pairwise/listwise
ranking observations under specific probabilistic distributional assumptions on the observed rankings
[31; 26; 29]. In comparison, the estimators and algorithms in this paper are agnostic to the generative
distribution, and hence have much wider applicability.
Learning to rank (LETOR): LETOR is a structured prediction task of rank ordering relevance of a
list of items as a function of pre-selected features [25]. Currently, leading algorithms for LETOR
are listwise methods [9] (as is the approach taken in this paper), which fully exploit the ranking
structure of ordered observations, and offer better modeling flexibility compared to the pointwise
[24] and pairwise methods [16; 18]. A recent listwise LETOR algorithm proposed the idea of
monotone retargeting (MR) [2], which elegantly addresses listwise learning to rank (LETOR) task
while maintaining the relative simplicity and scalability of pointwise estimation. MR was further
extended to incorporate margins in the margin equipped monotonic retargeting (MEMR) formulation
[1] to preclude trivial solutions that arise from scale invariance of the initial MR estimate in Acharyya
et al. [2]. The estimator proposed in this paper is inspired by the idea of MR and will be revisited
later in the paper. In collaborative preference completion, rather than learning a functional mapping
from features to ranking, we seek to exploit the low rank structure in jointly modeling the preferences
of a collection of entities without access to preference indicative features.
Single Index Models (SIMs): Finally, the literature on monotonic single index models (SIMs) also
considers estimation under unknown monotonic transformations [17; 20]. However, algorithms for
SIMs are designed to solve the harder problem of exactly estimating the non-parametric monotonic
transformation, and are evaluated for parameter recovery rather than ranking performance. In
general, with no further assumptions, the sample complexity of SIM estimators renders them unsuitable
for high dimensional estimation. The existing high dimensional estimators for learning SIMs typically
assume Lipschitz continuity of the monotonic transformation, which explicitly uses the observed score
values in bounding the Lipschitz constant of the transformation [19; 13]. In comparison,
our proposed model is completely agnostic to the numerical values of the preference scores.
3 Preference Completion from Partial Rankings
Let the unobserved true preference scores of d2 entities for d1 items be denoted by a rank r ≪
min{d1, d2} matrix Θ* ∈ R^{d1×d2}. For each entity j ∈ [d2], we observe a partial or total ordering of
preferences for a subset of items denoted by I_j ⊆ [d1]. Let n_j = |I_j| denote the number of items
over which relative preferences of entity j are observed, so that Ω_j = {(i, j) : i ∈ I_j} is the
entity-item index set for j, and Ω = ∪_j Ω_j is the index set collected across entities. Let P_Ω
denote the sampling distribution for Ω. The observed preferences of entity j are typically represented
by a listwise preference score vector y^(j) ∈ R^{n_j}:
    ∀ j ∈ [d2],   y^(j) = g_j( P_{Ω_j}(Θ* + W) ),    (1)
where the (g_j) are arbitrary and unknown monotonic transformations, and W ∈ R^{d1×d2} is a
non-adversarial noise matrix sampled from the distribution P_W. The preference completion task is to
estimate the unseen rankings within each column of Θ* from a subset of orderings (Ω_j, y^(j))_{j∈[d2]}.
As the (g_j) are arbitrary, the exact values of the (y^(j)) are inconsequential, and the observed
preference order can be specified by a constraint set parameterized by a margin parameter ε as follows:
Definition 1 (ε-margin Isotonic Set) The following set of vectors are isotonic to y ∈ R^n with an
ε > 0 margin parameter:
    R^n_ε(y) = {x ∈ R^n : ∀ i, k ∈ [n], y_i < y_k ⟹ x_i ≤ x_k − ε}.
In addition to score vectors, isotonic sets of the form R^n_ε(y) are equivalently defined for any
DAG y = G([n], E) which denotes a partial ranking among the vertices, with the convention that
(i, k) ∈ E ⟹ ∀ x ∈ R^n_ε(y), x_i ≤ x_k − ε. We note from Definition 1 that ties are not broken at
random; e.g., if y_{i1} = y_{i2} < y_k, then ∀ x ∈ R^n_ε(y), x_{i1} ≤ x_k − ε and x_{i2} ≤ x_k − ε, but no
particular ordering between x_{i1} and x_{i2} is specified.
Let y_(k) denote the k-th smallest entry of y ∈ R^n. We distinguish between three special cases of an
observation y representing a partial ranking over [n]:
(A) Strict Total Order: y_(1) < y_(2) < … < y_(n).
(B) Blockwise Total Order: y_(1) ≤ y_(2) ≤ … ≤ y_(n), with K ≪ n unique values.
(C) Arbitrary DAG: the partial order induced by a DAG y = G([n], E).
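To make the observation model (1) and case (A) concrete, here is a small simulation sketch; the dimensions, noise level, and the particular monotone transforms g_j are illustrative assumptions of ours, not choices made in the paper.

    import numpy as np

    # Sketch of the observation model (1): low-rank Theta*, per-entity
    # monotone transforms g_j, and a random item subset per entity.
    rng = np.random.default_rng(0)
    d1, d2, r = 50, 30, 3
    theta = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
    transforms = [np.tanh, np.exp, lambda t: t ** 3]   # arbitrary monotone g_j

    def observe(j, n_j=10, noise=0.01):
        items = rng.choice(d1, size=n_j, replace=False)             # I_j
        g = transforms[j % len(transforms)]
        scores = g(theta[items, j] + noise * rng.standard_normal(n_j))
        return items, scores    # (Omega_j, y^(j)): only the order matters

    items, scores = observe(j=0)
    print(items[np.argsort(scores)])   # induced preference ranking of items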
3.1 Monotone Retargeted Low Rank Estimator
Consider any scalable pointwise learning algorithm that fits a model to exact preference scores.
Since no generative model (besides monotonicity) is assumed for the raw numerical scores in the
observations, in principle the scores y^(j) for entity j can be replaced, or retargeted, to any
ranking-preserving scores, i.e., to any vector in R^{n_j}_ε(y^(j)). Monotone Retargeting (MR) [2] exploits this
observation to address the combinatorial listwise ranking problem [25] while maintaining the relative
simplicity and scalability of pointwise estimates (regression). The key idea in MR is to alternately fit
a pointwise algorithm to current relevance scores, and retarget the scores by searching over the space
of all monotonic transformations of the scores. Our approach extends and generalizes monotone
retargeting for the preference prediction task.
We begin by motivating an algorithm for the noise free setting, where it is clear that Θ*_{Ω_j} ∈ R^{n_j}_ε(y^(j)),
so we seek to estimate a candidate preference matrix X that is in the intersection of (a) the data
constraints from the observed preference rankings {X_{Ω_j} ∈ R^{n_j}_ε(y^(j))}, and (b) the model
constraints, in this case low rankness, induced by constraining the nuclear norm ‖X‖_*. For robust estimation
in the presence of noise, we may extend the noise free approach by incorporating a soft penalty on
constraint violations. Let z ∈ R^{|Ω|}, and with slight abuse of notation, let z_{Ω_j} ∈ R^{n_j} denote the
vector of the entries of z ∈ R^{|Ω|} corresponding to Ω_j ⊆ Ω. Upon incorporating the soft penalties, the
monotone retargeted low rank estimator is given by:
    X̂ = Argmin_X min_{z ∈ R^{|Ω|}} λ ‖X‖_* + (1/2) ‖z − P_Ω(X)‖_2²   s.t.  ∀ j, z_{Ω_j} ∈ R^{n_j}_ε(y^(j)),    (2)
where the parameter λ controls the trade-off between nuclear norm regularization and data fit,
and X̂ is the set of minimizers of (2). We note that R^n_ε(y) is convex, and for all α ≥ 1, the scaling
α R^n_ε(y) = {α x : x ∈ R^n_ε(y)} ⊆ R^n_ε(y). The above estimate can be computed using efficient
convex optimization algorithms and can handle arbitrary monotonic transformations of the preference
scores, thus providing higher flexibility compared to standard matrix completion.
Although (2) is specified in terms of two parameters, due to the geometry of the problem, it turns out
that λ and ε are not jointly identifiable, as discussed in the following proposition.
Proposition 1 The optimization in (2) is jointly convex in (X, z). Further, for every β > 0, the
parameter pairs (λ, βε) and (β⁻¹λ, ε) lead to equivalent estimators; specifically, X̂(λ, βε) = β X̂(β⁻¹λ, ε).
Since positive scaling of X̂ preserves the resultant preference order, using Proposition 1, without loss
of generality only one of ε or λ requires tuning, with the other remaining fixed.
4 Optimization Algorithm
The optimization problem in (2) is jointly convex in (X, z). Further, we later show that the proximal
operator of the non-differentiable component of the estimate, λ‖X‖_* + Σ_j I(z_{Ω_j} ∈ R^{n_j}_ε(y^(j))), is
efficiently computable. This motivates using the proximal gradient descent algorithm [30] to jointly
update (X, z). For an appropriate step size η = 1/2, the resulting updates are as follows:
• X Update (Singular Value Thresholding): The proximal operator for η‖·‖_* is the singular value
thresholding operator S_η. For X with singular value decomposition X = U Σ V^T and η ≥ 0,
S_η(X) = U s_η(Σ) V^T, where s_η is the soft thresholding operator given by s_η(x)_i = max{x_i − η, 0}.
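A minimal NumPy version of this standard step (the function name svt is our own):

    import numpy as np

    # Singular value thresholding S_eta: the proximal operator of eta * ||.||_*.
    def svt(X, eta):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - eta, 0.0)) @ Vt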
• z Update (Parallel Projections): For the hard constraints on z, the proximal operator at v is the
Euclidean projection onto the constraints, given by z ← argmin_z ‖z − v‖_2² s.t. z_{Ω_j} ∈ R^{n_j}_ε(y^(j))
for all j ∈ [d2]. These updates decouple along each entity (column) z_{Ω_j} and can be trivially
parallelized. Efficient projections onto R^{n_j}_ε(y^(j)) are discussed in Section 4.1.
Algorithm 1 Proximal Gradient Descent for (2), with input Ω, {y^(j)}, and parameter λ
for k = 0, 1, 2, … until (stopping criterion):
    X^(k+1) = S_{λ/2}( X^(k) + (1/2) P_Ω^*( z^(k) − X^(k)_Ω ) ),    (3)
    ∀ j,  z^(k+1)_{Ω_j} = Proj_{R^{n_j}_ε(y^(j))}( ( z^(k)_{Ω_j} + X^(k)_{Ω_j} ) / 2 ).    (4)
4.1 Projection onto R^n_ε(y)
We begin with the following definitions that are used in characterizing R^n_ε(y).
Definition 2 (Adjacent difference operator) The adjacent difference operator in R^n, denoted by
D_n : R^n → R^{n−1}, is defined as (D_n x)_i = x_i − x_{i+1}, for i ∈ [n−1].
Definition 3 (Incidence Matrix) For a directed graph G(V, E), the incidence matrix A_G ∈ R^{|V|×|E|}
is such that: if the j-th directed edge e_j ∈ E is from the i-th node to the k-th node, then (A_G)_{ij} = 1,
(A_G)_{kj} = −1, and (A_G)_{lj} = 0 for all l ≠ i, k.
Projection onto R^n_ε(y) is closely related to the isotonic regression problem of finding a univariate
least squares fit under consistent order constraints (without margins). This isotonic regression problem
in R^n can be solved exactly in O(n) time using the classical Pool of Adjacent Violators (PAV)
algorithm [15; 4]:
    PAV(v) = argmin_{z′ ∈ R^n} ‖z′ − v‖²   s.t.  z′_i − z′_{i+1} ≤ 0.    (5)
As we discuss, simple adaptations of isotonic regression can be used for projection onto the ε-margin
isotonic sets for the three special cases of interest, as summarized in Table 1.
(A) Strict Total Order: y_(1) < y_(2) < … < y_(n)
In this setting, the constraint set can be characterized as R^n_ε(y) = {x : D_n x ≤ −ε 1}, where 1 is a
vector of ones. For this case, projection onto R^n_ε(y) differs from (5) only in requiring an ε-separation,
and a straightforward extension of the PAV algorithm [4] can be used. Let d^sl ∈ R^n be any vector
such that ε 1 = −D_n d^sl; then by simple substitutions, Proj_{R^n_ε(y)}(x) = PAV(x − d^sl) + d^sl.
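A compact sketch of this routine (our own PAV implementation plus the margin shift d^sl, with (d^sl)_i = iε):

    import numpy as np

    # Pool Adjacent Violators: isotonic (non-decreasing) least squares fit.
    def pav(v):
        vals, wts = [], []
        for x in v:
            vals.append(float(x)); wts.append(1)
            while len(vals) > 1 and vals[-2] > vals[-1]:
                w = wts[-2] + wts[-1]
                m = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
                vals[-2:] = [m]; wts[-2:] = [w]
        return np.repeat(vals, wts)

    # Projection onto the eps-margin isotonic set of a strict total order:
    # sort x by the order of y, shift by d^sl, run PAV, shift back, unsort.
    def proj_total_order(x, y, eps=0.1):
        order = np.argsort(y)
        d_sl = eps * np.arange(len(y))     # satisfies eps*1 = -D_n d^sl
        z_sorted = pav(x[order] - d_sl) + d_sl
        z = np.empty_like(z_sorted); z[order] = z_sorted
        return z

This proj_total_order can, for instance, serve as the proj argument in the Algorithm 1 sketch above when each column carries a strict total order.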
(B) Blockwise Total Order: y_(1) ≤ y_(2) ≤ … ≤ y_(n)
This is a common setting for supervision in preference completion applications, where listwise
ranking preferences obtained from ratings over discrete quantized levels 1, 2, …, K, with K ≪ n,
are prevalent. Let y be partitioned into K ≤ n blocks P = {P_1, P_2, …, P_K}, such that the
entries of y within each partition are equal, and the blocks themselves are strictly ordered,
    i.e., ∀ k ∈ [K],  sup y(P_{k−1}) < inf y(P_k) = sup y(P_k) < inf y(P_{k+1}),
where P_0 = P_{K+1} = ∅ and y(P) = {y_i : i ∈ P}.
Let d^bl ∈ R^n be the vector of scaled block indices, d^bl_i = ε Σ_{k=1}^K k · 1{i ∈ P_k}, i.e., d^bl =
ε [1, 1, …, 2, 2, …, K, K, …, K]^T. Let Π_P be the set of permutations that permute entries only within
the blocks {P_k ∈ P}; then R^n_ε(y) = {x : ∃ π ∈ Π_P, D_n π(x) ≤ D_n d^bl}. We propose the following
steps to compute ẑ = Proj_{R^n_ε(y)}(x) in this case:
    Step 1. Compute π*(x) such that ∀ k ∈ [K], π*(x)_{P_k} = sort(x_{P_k});
    Step 2. ẑ = π*^{−1}( PAV(π*(x) − d^bl) + d^bl ).    (6)
The correctness of (6) is summarized by the following lemma.
Lemma 2 The estimate ẑ from (6) is the unique minimizer of
    argmin_z ‖z − x‖_2²   s.t.  ∃ π ∈ Π_P : D_n π(z) ≤ D_n d^bl.
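The two steps in (6) translate directly into code; the sketch below reuses the pav routine above (lexsort realizes the within-block sort π*, and the final scatter undoes it):

    import numpy as np

    # Projection for a blockwise total order (case (B)).
    def proj_blockwise(x, y, eps=0.1):
        order = np.lexsort((x, y))         # sort by y, break ties by x (pi*)
        blocks = np.unique(y[order], return_inverse=True)[1]
        d_bl = eps * blocks                # d^bl: eps times the block index
        z_sorted = pav(x[order] - d_bl) + d_bl
        z = np.empty_like(z_sorted); z[order] = z_sorted
        return z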
(C) Arbitrary DAG: y = G([n], E)
An arbitrary DAG (not necessarily connected) can be used to represent any consistent order constraints
over its vertices, e.g., partial rankings consolidated from multiple listwise/pairwise scores. In this
case, the ε-margin isotonic set is given by R^n_ε(y) = {x : A_G^T x ≤ −ε 1} (cf. Definition 3).
Consider d^DAG ∈ R^n such that the i-th entry d^DAG_i is ε times the length of the longest directed
chain through the topological descendants of node i. It can be verified that the isotonic regression
algorithm for arbitrary DAGs applied to x − d^DAG gives the projection onto R^n_ε(y). In this most
general setting, the best exact isotonic regression algorithm requires O(n m² + n³ log n)
computation [34], where m is the number of edges in G. While even in the best case of m = o(n)
the computation can be prohibitive, we include this case for completeness. We also note that this
case of partial DAG orderings cannot be handled in the standard matrix completion setting without
somehow consolidating the partial ranks into a total order.
consolidating the partial ranks to total order.
Rn
? (y)
(A)
(B)
(C)
ProjRn
?
{x : Dn x ? ?1}
{x : ?? ? ?P , Dn ?(x) ? ?1}
{x : A>
G x ? ?1}
(y) (x)
sl
Computation
sl
PAV(x ? d ) + d
?1
?P?
PAV(?P? (x) ? dbl ) + dbl )
IsoReg(x ? dDAG , G)+dDAG [34]
O(n)
O(n log n)
O(n2 m + n3 log n)
Table 1: Summary of algorithms for ProjRn (y) (x)
?
4.2 Computational Complexity
It can be easily verified that the gradient of (1/2)‖P_Ω(X) − z‖_2² is 2-Lipschitz continuous. Thus, from
standard convergence results for proximal gradient descent [30], Algorithm 1 converges to within an ε′
error in objective value in O(1/ε′) iterations. Compared to proximal algorithms for standard matrix
completion [5; 27], the additional complexity in Algorithm 1 arises in the z update (4), which is the
simple substitution z^(k) = X^(k)_Ω in standard matrix completion. For total orders, the z update (4)
is highly efficient and is asymptotically within an additional log |Ω| factor of the computational cost
of standard matrix completion.
5 Generalization Error
Recall that the y_j are (noisy) partial rankings of a subset of items for each user, obtained from
g_j(Θ*_j + W_j), where W is a noise matrix and the g_j are unknown, arbitrary transformations that only
preserve the ranking order within each column. The estimator and the algorithms described so far are
independent of the sampling distribution generating (Ω, {y_j}). In this section we derive simple
generalization error bounds for (2).
Assumption 1 (Sampling (P_Ω)) For a fixed W and Θ*, we assume the following sampling
distribution. Let c_0 be a fixed constant and R a pre-specified parameter denoting the length of a single
listwise observation. For s = 1, 2, …, |S| = c_0 d2 log d2:
    j(s) ∼ uniform[d2],        I(s) ∼ randsample([d1], R),
    Ω(s) = {(i, j(s)) : i ∈ I(s)},        y(s) = g_{j(s)}( P_{Ω(s)}(Θ* + W) ).    (7)
Further, we define the notation: ∀ j, I_j = ∪_{s : j(s)=j} I(s), Ω_j = ∪_{s : j(s)=j} Ω(s), and n_j = |Ω_j|.
For each column j, the listwise scores {y(s) : j(s) = j} jointly define a consistent partial ranking of
I_j, as the scores are subsets of a monotonically transformed preference vector g_j(Θ*_j + W_j). This
consistent ordering is represented by a DAG y^(j) = PartialOrder({y(s) : j(s) = j}). We also note
that O(d2 log d2) samples ensure that each column is included in the sampling with high probability.
Definition 4 (Projection Loss) Let y = G([n], E) or y ∈ R^n define a partial ordering or a total order
in R^n, respectively. We define the following convex surrogate loss over partial rankings:
    ℓ(x, y) = min_{z ∈ R^n_ε(y)} ‖x − z‖_2 .
Theorem 3 (Generalization Bound) Let X̂ be an estimate from (2). With appropriate scaling, let
‖X̂‖_F = 1. Then for constants K1, K2, the following holds with probability greater than 1 − δ over
all observed rankings {y^(j), Ω_j : j ∈ [d2]} drawn from (7) with |S| ≥ c_0 d2 log d2:
    E_{y(s),Ω(s)} ℓ(X̂_{Ω(s)}, y(s)) ≤ (1/|S|) Σ_{s=1}^{|S|} ℓ(X̂_{Ω(s)}, y(s))
        + K1 ( ‖X̂‖_* log^{1/4} d1 / sqrt(d1 d2) ) sqrt( d2 log d2 / (R |S|) ) + K2 sqrt( log(2/δ) / |S| ).
Theorem 3 quantifies the test projection loss over a random length-R set of items I(s) drawn for a
random entity/user j(s). The bound exhibits a trade-off between the observable training error and a
complexity term governed by the nuclear norm of the estimate.
6 Experiments
We evaluate our model on two collaborative preference estimation tasks: (a) a standard user-item
recommendation task on a benchmarked dataset from Movielens, and (b) identifying associations
between brain regions and cognitive terms using the neurosynth dataset [37].
Baselines: The following baseline models are compared in our experiments:
• Retargeted Matrix Completion (RMC): the estimator proposed in (2).
• Standard Matrix Completion (SMC) [8]: We primarily compare our estimator with the
standard convex estimator for matrix completion using nuclear norm minimization.
• Collaborative Filtering Ranking (CoFi-Rank) [35]: This work addresses the collaborative filtering
task in a listwise ranking setting.
For SMC and RMC, the hyperparameters were tuned using grid search on a logarithmic scale.
Due to the high computational cost of tuning parameters in CofiRank, we use the code and default
parameters provided by the authors.
Evaluation metrics: The performance on the preference estimation tasks is evaluated on four ranking
metrics: (a) Normalized Discounted Cumulative Gain (NDCG@N), (b) Precision@N, (c)
Spearman Rho, and (d) Kendall Tau, where the latter two metrics measure the correlation over the
complete ordering of the list, while the former two primarily evaluate the correctness of the
ranking at the top of the list (see Liu et al. [25] for further details on these metrics).
Movielens dataset (blockwise total order): MovieLens is a movie recommendation website administered
by GroupLens Research. We used the competitively benchmarked MovieLens 100K dataset. We
used the 5-fold train/test splits provided with the dataset (the test splits are non-overlapping). We
discarded the small number of users that had fewer than 10 ratings in any of the 5 training data splits.
The resultant dataset consists of 923 users and 1682 items. The ratings are blockwise ordered, taking
one of 5 values in the set {1, 2, …, 5}. During testing, for each user, the competing models return
a ranking of the test items, and the performance is averaged across test users. Table 2 presents the
results of our evaluation averaged across the 5 train/test splits on the Movielens dataset, along with
the standard deviation. We see that the proposed retargeted matrix completion (RMC) significantly
and consistently outperforms SMC and CoFi-Rank [35] across ranking metrics.
            NDCG@5           Precision@5      Spearman Rho     Kendall Tau
RMC         0.7984 (0.0213)  0.7546 (0.0320)  0.4137 (0.0099)  0.3383 (0.0117)
SMC         0.7863 (0.0243)  0.7429 (0.0295)  0.3722 (0.0106)  0.3031 (0.0117)
CoFi-Rank   0.7731 (0.0213)  0.7314 (0.0293)  0.3681 (0.0082)  0.2993 (0.0110)
Table 2: Ranking performance for recommendations in Movielens 100K. The table shows means and
standard deviations over 5-fold train/test splits. For all reported metrics, higher values are better [25].
Neurosynth dataset (almost total order): Neurosynth [37] is a publicly available database consisting of
data automatically extracted from a large collection of functional magnetic resonance imaging (fMRI)
publications (11,362 publications in the current version). For each publication, the
database contains the abstract text and all reported 3-dimensional peak activation coordinates in
the study. The text is pre-processed to remove common stop-words and any term with less than
0.1% frequency, leaving a total of 3169 terms. We applied the standard brain map to the activations,
removing voxels outside of the grey matter. Next, the activations were downsampled from 2mm³
voxels to 10mm³ voxels using the nilearn python package, resulting in 1231 dense voxels.
The affinity measure between the 3169 terms and 1231 consolidated voxels is obtained by multiplying
the term × publication and publication × voxel matrices. The resulting data is a dense, high-rank
preference matrix. With very few tied preference values, this setting best fits the case of totally ordered
observations (case (A) in Section 4.1). Using this data, we consider the reverse inference task of
ranking cognitive concepts (terms) for each brain region (voxel) [37].
Train-val-test: We used 10% of randomly sampled entries of the matrix as test data and another
10% for validation. We created training datasets at various sample sizes by subsampling from the
remaining 80% of the data. This random split is replicated multiple times to obtain 3 bootstrapped
data splits (note that, unlike cross validation, the test datasets here can have some overlapping entries).
The results in Fig. 1 show that the proposed estimate from (2) outperforms standard matrix completion
in terms of popular ranking metrics.
Figure 1: Ranking performance for reverse inference on Neurosynth data. The x-axis denotes the fraction of
the affinity matrix entries used as observations in training. Plots show means with error bars (standard
deviation) over 3 bootstrapped train/test splits. For all reported ranking metrics, higher values are better [25].
7 Conclusion
Our work addresses the problem of collaborative ranking, a task of growing importance to modern
problems in recommender systems, large scale meta-analysis, and related areas. We proposed a
novel convex estimator for collaborative LETOR from sparsely observed preferences, where the
observations can be either score vectors representing a total order or, more generally, directed acyclic
graphs representing partial orders. Remarkably, in the case of complete orders, the complexity of our
algorithm is within a log factor of the state-of-the-art algorithms for standard matrix completion.
Our estimator was empirically evaluated in real data experiments.
Acknowledgments SG and JG acknowledge funding from NSF grants IIS-1421729 and SCH 1418511.
References
[1] S. Acharyya and J. Ghosh. MEMR: A margin equipped monotone retargeting framework for ranking. In
UAI, 2013.
[2] S. Acharyya, O. Koyejo, and J. Ghosh. Learning to rank with bregman divergences and monotone
retargeting. In UAI, 2012.
[3] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural
results. JMLR, 2003.
[4] M. J. Best and N. Chakravarti. Active set algorithms for isotonic regression; a unifying framework. Math.
Program., 1990.
[5] J. F. Cai, E. J. Candes, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM
J. Optim., 2010.
[6] E. J. Candès and Y. Plan. Matrix completion with noise. Proc. IEEE, 2010.
[7] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. FoCM, 2009.
[8] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from
highly incomplete frequency information. IEEE Trans. Inf. Theory, 2006.
[9] Z. Cao, T. Qin, T. Y. Liu, M. F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise
approach. In ICML, 2007.
[10] E. Chi, H. Zhou, G. Chen, D. O. Del Vecchyo, and K. Lange. Genotype imputation via matrix completion.
Genome Res., 2013.
[11] P. Cremonesi, Y. Koren, and R. Turrin. Performance of recommender algorithms on top-n recommendation
tasks. In RecSys. ACM, 2010.
[12] J. C. Duchi, L. W Mackey, and M. I. Jordan. On the consistency of ranking algorithms. In ICML, 2010.
[13] R. Ganti, L. Balzano, and R. Willett. Matrix completion under monotonic single index models. In NIPS,
2015.
[14] D. Goldberg, D. Nichols, B. Oki, and D. Terry. Using collaborative filtering to weave an information
tapestry. Commun. ACM, 1992.
[15] S.J. Grotzinger and C. Witzgall. Projections onto order simplexes. Appl. Math. Optim., 1984.
[16] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In NIPS,
1999.
[17] J. L. Horowitz. Semiparametric and nonparametric methods in econometrics, volume 12. Springer, 2009.
[18] T. Joachims. Optimizing search engines using clickthrough data. In SIGKDD, 2002.
[19] S. M. Kakade, V. Kanade, O. Shamir, and A. Kalai. Efficient learning of generalized linear and single
index models with isotonic regression. In NIPS, 2011.
[20] A. T. Kalai and R. Sastry. The isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
[21] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on
IT, 2010.
[22] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. IEEE
Computer, 2009.
[23] O. Koyejo, S. Acharyya, and J. Ghosh. Retargeted matrix factorization for collaborative filtering. In
RecSys, 2013.
[24] P. Li, Q. Wu, and C. J. Burges. Mcrank: Learning to rank using multiple classification and gradient
boosting. In NIPS, 2007.
[25] T. Y. Liu. Learning to rank for information retrieval. Foundations and Trends in IR, 2009.
[26] Y. Lu and S. N. Negahban. Individualized rank aggregation using nuclear norm regularization. In Annual
Allerton Conference on Communication, Control, and Computing (Allerton), 2015.
[27] S. Ma, D. Goldfarb, and L. Chen. Fixed point and bregman iterative methods for matrix rank minimization.
Math. Program., 2011.
[28] A. Mnih and R. Salakhutdinov. Probabilistic matrix factorization. In NIPS, 2007.
[29] S. Oh, K. K. Thekumparampil, and J. Xu. Collaboratively learning preferences from ordinal data. In NIPS,
2015.
[30] N. Parikh and S. P. Boyd. Proximal algorithms. Foundations and Trends in optimization, 2014.
[31] D. Park, J. Neeman, J. Zhang, S. Sanghavi, and I. Dhillon. Preference completion: Large-scale collaborative
ranking from pairwise comparisons. In ICML, 2015.
[32] B. Recht. A simpler approach to matrix completion. JMLR, 2011.
[33] H. Steck. Training and testing of recommender systems on data missing not at random. In KDD. ACM,
2010.
[34] Q. F. Stout. Isotonic regression via partitioning. Algorithmica, 2013.
[35] M. Weimer, A. Karatzoglou, Q. V. Le, and A. J. Smola. COFIRANK - maximum margin matrix factorization
for collaborative ranking. In NIPS, 2008.
[36] T. Yarkoni, R. A. Poldrack, T. E. Nichols, D. C. Van Essen, and T. D. Wager. Large-scale automated
synthesis of human functional neuroimaging data. Nat. Methods, 2011.
[37] Tal Yarkoni. http://neurosynth.org/. http://neurosynth.org/, 2011.
[38] Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan. Large-scale parallel collaborative filtering for the netflix
prize. In Algorithmic Aspects in Information and Management, LNCS 5034, 2008.
5,828 | 6,273 | Fundamental Limits of Budget-Fidelity Trade-off in
Label Crowdsourcing
Farshad Lahouti
Electrical Engineering Department, California Institute of Technology
[email protected]
Babak Hassibi
Electrical Engineering Department, California Institute of Technology
[email protected]
Abstract
Digital crowdsourcing (CS) is a modern approach to perform certain large projects
using small contributions of a large crowd. In CS, a taskmaster typically breaks
down the project into small batches of tasks and assigns them to so-called workers
with imperfect skill levels. The crowdsourcer then collects and analyzes the results
for inference and serving the purpose of the project. In this work, the CS problem,
as a human-in-the-loop computation problem, is modeled and analyzed in an
information theoretic rate-distortion framework. The purpose is to identify the
ultimate ?delity that one can achieve by any form of query from the crowd and any
decoding (inference) algorithm with a given budget. The results are established
by a joint source channel (de)coding scheme, which represents the query scheme
and inference, over parallel noisy channels, which model workers with imperfect
skill levels. We also present and analyze a query scheme dubbed k-ary incidence
coding and study optimized query pricing in this setting.
1 Introduction
Digital crowdsourcing (CS) is a modern approach to perform certain large projects using small
contributions of a large crowd. Crowdsourcing is usually used when the tasks involved better suit
humans than machines, or in situations where they require some form of human participation.
As such, crowdsourcing is categorized as a form of human-based computation or a human-in-the-loop
computation system. This article examines the fundamental performance limits of crowdsourcing
and sheds light on the design of optimized crowdsourcing systems.
Crowdsourcing is used in many machine learning projects for labeling of large sets of unlabeled
data and Amazon Mechanical Turk (AMT) serves as a popular platform to this end. Crowdsourcing
is also useful in very subjective matters such as rating of different goods and services, as is now
widely popular in different online rating platforms and applications such as Yelp. Another example is
if we wish to classify a large number of images as suitable or unsuitable for children. In so-called
citizen research projects, a large number of sensors, often deployed or operated by humans, contribute
to accomplish a wide array of crowdsensing objectives, e.g., [2] and [3].
In crowdsourcing, a taskmaster typically breaks down the project into small batches of tasks, recruits
so-called workers and assigns them the tasks accordingly. The crowdsourcer then collects and
analyzes the results collectively to address the purpose of the project. The worker's pay is often
low or non-existent. In cases such as labeling, the work is typically tedious and hence the workers
usually handle only a small batch of work in a given project. The workers are often non-specialists
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: (a) An information theoretic crowdsourcing model; (b) 3IC for N = 2, valid responses; (c) invalid responses.
and as such there may be errors in their completion of assigned tasks. Due to the nature of task
assignment by the taskmaster, the workers and their skill levels are typically unknown a priori. In case
of rating systems, such as Yelp, there is no pay for regular reviewers but only non-monetary personal
incentives; however, there are illegal reviewers who are paid to write fake reviews. In many cases
of crowdsourcing, the ground truth is not known at all. The transitory or fleeting nature of
workers, their unknown and imperfect skill levels, and their possible motivations for spamming make
the design of crowdsourcing projects and the analysis of the obtained results particularly challenging.
Researchers have studied the optimized design of crowdsourcing systems within the setting described
for enhanced reliability. Most research reported so far is devoted to optimized design and analysis
of aggregation and inference schemes and possibly using redundant task assignment. In AMT-type
crowdsourcing, two popular approaches for aggregation are majority voting and the Dawid
and Skene (DS) algorithm [5]. The former sets the estimate based on what the majority of the crowd
agrees on, and is provably suboptimal [8]. Majority voting is susceptible to error when there are
spammers in the crowd, as it weighs the opinion of everybody in the crowd the same. The DS
algorithm, within a probabilistic framework, aims at jointly estimating the workers' skill levels
and a reliable label based on the data collected from the crowd. The scheme runs as an expectation
maximization (EM) formulation in an iterative manner. More recent research with similar EM
formulations and a variety of probabilistic models is reported in [9, 12, 10]. In [8], a label inference
algorithm for CS is presented that runs iteratively over a bipartite graph. In [1], the CS problem
is posed as a so-called bandit survey problem, for which the trade-offs of cost and reliability in the
context of worker selection are studied. Schemes for identifying workers with low skill levels are studied
in, e.g., [6]. In [13], an analysis of the DS algorithm is presented together with an improved inference
algorithm. Another class of works on crowdsourcing for clustering relies on convex optimization
formulations of inferring clusters within probabilistic graphical models, e.g., [11] and the references
therein.
In this work, a crowdsourcing problem is modeled and analyzed in an information theoretic setting.
The purpose is to seek ultimate performance bounds, in terms of the CS budget (or equivalently the
number of queries per item) and a CS fidelity, that one can achieve by any form of query from the
workers and any inference algorithm. Two particular scenarios of interest include the case where the
workers' skill levels are unknown both to the taskmaster and the crowdsourcer, or the case where
the skill levels are perfectly estimated during inference by the crowdsourcer. Within the presented
framework, we also investigate a class of query schemes dubbed k-ary incidence coding and analyze
its performance. At the end, we comment on an associated query pricing strategy.
2 Modeling Crowdsourcing
In this Section, we present a communication system model for crowdsourcing. The model, as depicted
in Figure 1a, then enables the analysis of the fundamental performance limits of crowdsourcing.
2.1 Data Set: Source
Consider a dataset X = {X_1, . . . , X_L} composed of L items, e.g., images. In practice, there is
a certain function B(X) ∈ B(X) of the items that is of interest in crowdsourcing and is here considered
as the source. The value of this function is to be determined by the crowd for the given dataset. In the
case of crowdsourced clustering, B(X_i) = B_j ∈ B(X) = {B_1, . . . , B_N} indicates the bin or cluster
to which the item X_i ideally belongs. We have B(X_1, . . . , X_n) = B(X^n) = (B(X_1), . . . , B(X_n)).
The number of clusters, |B(X)| = N, may or may not be known a priori.
2.2 Crowd Workers: Channels
The crowd is modeled by a set of parallel noisy channels in which each channel Ci , i = 1, . . . , W,
represents the ith worker. The channel input is a query that is designed based on the source. The
channel output is the user's response to the query. The output may or may not be the
correct answer to the query depending on the skill level of the worker and hence the noisy channel is
meant to model possible errors by the worker.
A suitable model for Ci is a discrete channel model. The channels may be assumed independent,
on the basis that different individuals have different knowledge sets. Related probabilistic models
representing the possible error in completion of a task by a worker are reviewed in [8]. Formally, a
channel (worker) is represented by a probability distribution P(v|u), u ∈ U, v ∈ V, where U is the
set of possible responses to a query and V is the set of choices offered to the worker in responding
to a query. For the example of images suitable for children, in general we may consider a range of
possible responses to the query, U, including the extremes of totally suitable and totally unsuitable; as
the possible choices offered to the worker to answer the query, V, we may consider the two options of suitable
and unsuitable. As described below, in this work we consider two channel models representing
possibly erroneous responses of the workers: an M -ary symmetric channel model (MSC) and a
spammer-hammer channel model (SHC).
An MSC model with parameter ε is a symmetric discrete memoryless channel without feedback [4],
with input u ∈ U and output v ∈ V (|U| = |V| = M), that is characterized by the following
transition probability:

    P(v|u) = ε/(M − 1)   if v ≠ u
    P(v|u) = 1 − ε       if v = u.          (1)
If we consider a sequence of channel inputs u^n = (u_1, . . . , u_n) and the corresponding output
sequence v^n, we have P(v^n | u^n) = ∏_{i=1}^{n} P(v_i | u_i), which holds because of the memoryless
and no-feedback assumptions. In the case of clustering with MSC, the probability of misclassifying any
input from a given cluster into another cluster depends only on the worker and not on the corresponding clusters.
In the spammer-hammer channel model with probability q of being a hammer (SHC(q)), a
spammer chooses a valid response to a query uniformly at random, and a hammer perfectly
answers the query [8]. The corresponding discrete memoryless channel model without feedback,
with input u ∈ U, output v ∈ V, and state C ∈ {S, H}, P(C = H) = q, is described as follows:

    P(v|u, C) = 0        if C = H and v ≠ u
    P(v|u, C) = 1        if C = H and v = u          (2)
    P(v|u, C) = 1/|V|    if C = S

where C ∈ {S, H} indicates whether the worker (channel) is a spammer or a hammer. In the case of
our current interest |U| = |V| = M, and we have P(v^n | u^n, C^n) = ∏_{i=1}^{n} P(v_i | u_i, C_i).
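To make the two worker models concrete, the following sketch simulates single-query responses under MSC(ε) and SHC(q); the helper names, NumPy usage, and parameter values are our own illustrative choices, not anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def msc_response(u, M, eps):
    """MSC(eps): return the true answer u with prob. 1 - eps,
    otherwise one of the M - 1 wrong answers uniformly at random."""
    if rng.random() < 1.0 - eps:
        return u
    wrong = [v for v in range(M) if v != u]
    return rng.choice(wrong)

def shc_response(u, M, q):
    """SHC(q): a hammer (prob. q) answers perfectly; a spammer
    picks one of the M valid responses uniformly at random."""
    if rng.random() < q:      # worker is a hammer
        return u
    return rng.integers(M)    # worker is a spammer

# Empirical check of the per-query error rates.
M, eps, q, trials = 4, 0.2, 0.3, 100_000
msc_err = np.mean([msc_response(0, M, eps) != 0 for _ in range(trials)])
shc_err = np.mean([shc_response(0, M, q) != 0 for _ in range(trials)])
print(msc_err)  # ~ eps = 0.2
print(shc_err)  # ~ (1 - q)(1 - 1/M) = 0.7 * 0.75 = 0.525
```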
In the sequel, we consider the following two scenarios: when the workers' skill levels are unknown
(SL-UK) and when they are perfectly known by the crowdsourcer (SL-CS). In both cases, we assume that
the skill levels are not known at the taskmaster (transmitter).
The presented framework can also accommodate other more general scenarios of interest. For
example, the feedforward link in Figure 1a could be used to model a channel whose state is affected
by the input, e.g., the difficulty of questions. These extensions remain for future studies.
2.3 Query Scheme and Inference: Coding
In the system model presented in Figure 1a, encoding shows the way the queries are posed. A basic
query is one in which the worker is asked for the value of B(X). In the example of crowdsourcing for labeling
images that are suitable for children, the query is "This image suits children; true or false?" The
decoder, or the crowdsourcer, collects the responses of workers to the queries and attempts to infer the
right label (cluster) for each of the images. The collected responses, however, may in general be
incomplete or erroneous.
In the case of crowdsourcing for labeling a large set of dog images with their breeds, a query may be
formed by showing two pictures at once and inquiring whether they are from the same breed [11].
The queries in fact are posed as showing the elements of a binary incidence matrix, A, whose rows
and columns correspond to X. In this case, A(X1 , X2 ) = 1 indicates that the two are members of
the same cluster (breed) and A(X1 , X2 ) = 0 indicates otherwise. The matrix is symmetric and its
diagonal is 1. We refer to this query scheme as Binary Incidence Coding. If we show three pictures
at once and ask the user to classify them (put the pictures in similar or distinct bins), it is as if we
ask about three elements of the same matrix, i.e., A(X1 , X2 ), A(X1 , X3 ) and A(X2 , X3 ) (Ternary
Incidence Coding). In general, if we show k pictures as a single query, it is equivalent to inquiring
about C(k, 2) (choose 2 out of k elements) entries of the matrix (k-ary Incidence Coding or kIC). As
we elaborate below, out of the 2C(k,2) possibilities, a number of the choices remain invalid and this
provides an error correction capability for kIC.
Figures 1b and 1c show the graphical representation of 3IC, and the choices a worker would have in
clustering with this code. The nodes denote the items and the edges indicate whether they are in the
same cluster. In 3IC, if X1 and X2 are in the same cluster as X3 , then all three of them are in the
same cluster. It is straightforward to see that in 3IC and for N = 2, we only have four valid responses
(Figure 1b) to a query, as opposed to 2^{C(3,2)} = 8. The first item in Figure 1c is invalid because there
are only two clusters (N = 2) (in case we do not know the number of clusters, or N ≥ 3, this
would remain a valid response). In this setting, the encoded signal u can be one of the four valid
symbols in the set U; similarly, the worker's selection v (the decoded signal over the channel)
is from the set V, where U = V. As such, since in kIC the obviously erroneous answers are removed
from the choices a worker can make in responding to a query, one expects an improved overall CS
performance, i.e., an error correction capability for kIC. In Section 4, we study the performance of
this code in greater detail. Note that in clustering with kIC (k ≥ 2) described above, the code would
identify clusters up to their specific labelings.
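The error-correcting capability of kIC comes precisely from pruning invalid answers. As a sketch (our own enumeration code, not from the paper), the valid responses to a kIC query are the partitions of the k items into at most N non-empty clusters, which can be counted and compared against the naive 2^{C(k,2)} edge labelings:

```python
from math import comb

def partitions_at_most(k, N):
    """Enumerate partitions of items 0..k-1 into at most N blocks as
    restricted-growth strings (item 0 always opens block 0), so each
    partition is generated exactly once."""
    def rec(prefix, max_used):
        if len(prefix) == k:
            yield tuple(prefix)
            return
        for b in range(min(max_used + 1, N - 1) + 1):
            yield from rec(prefix + [b], max(max_used, b))
    yield from rec([0], 0)

k, N = 3, 2
valid = list(partitions_at_most(k, N))
print(len(valid), valid)   # 4 valid 3IC responses for N = 2 (Figure 1b)
print(2 ** comb(k, 2))     # 8 unconstrained labelings of the C(3,2) edges
```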
While we presented kIC as a concrete example, there may be many other forms of query or coding
schemes. Formally, the code is composed of encoder and decoder mappings:
    C : B(X)^n → U^{Σ_{i=1}^{W} m_i},        C′ : V^{Σ_{i=1}^{W} m_i} → B̂(X)^n,          (3)
where n is the block size or number of items in each encoding (we assume n | L), and m_i is the number
of uses of channel C_i, or queries that worker i, 1 ≤ i ≤ W, handles. In many practical cases of
interest, we have B̂(X) = B(X) and we may have n = L. The rate of the code is R = Σ_{i=1}^{W} m_i / n
queries per item. In this setting, C′(C(B(X^n))) = B̂(X^n).
Depending on the availability of feedback from the decoder to the encoder, the code can adapt for
optimized performance. The feedback could provide the encoder with the results of prior queries.
We here focus on non-adaptive codes in (3) that are static and remain unchanged over the period of
the crowdsourcing project. We will elaborate on code design in Section 3.
Depending on the type of code in use and the objectives of crowdsourcing, one may design different
decoding schemes. For instance, in the simple case of directly inquiring workers about the function
B(X_i), with multiple queries for each item, popular approaches are majority voting and EM-style
decoding due to Dawid and Skene [5], which attempts to jointly estimate the workers' skill
levels and decode B(X). In the case of clustering with 2IC, an inference scheme based on convex
optimization is presented in [11].
The rate of the code is proportional to the CS budget and we use the rate as a proxy for budget
throughout this analysis. However, since different types of query have different costs, both financially
(in crowdsourcing platforms) and from the perspective of time or effort it takes from the user to
process it, one needs to be careful in comparing the results of different coding schemes. We shall
elaborate on this issue for the case of kIC in Appendix E.
2.4 Distortion and the Design Problem
In the framework of Figure 1a, we are interested in designing the CS code, i.e., the query and inference
schemes, such that with a given budget a certain CS fidelity is optimized. We consider the fidelity as
an average distortion with respect to the source (dataset). For a distance function d(B(x), B̂(x)), for
which d(B(x^n), B̂(x^n)) = (1/n) Σ_{i=1}^{n} d(B(x_i), B̂(x_i)), the average distortion is

    D(B(X), B̂(X)) = E d(B(X^n), B̂(X^n))
                   = Σ_{X^n} P(B(X^n)) P(B̂(X^n) | B(X^n)) d(B(X^n), B̂(X^n)),          (4)

where P(B(X^n)) = P(B(X))^n for iid B(X). The design problem is therefore one of CS fidelity-query
budget optimization (or distortion-rate, D(R), optimization) and may be expressed as follows:

    D*(R^t) = min_{C, C′, R ≤ R^t} D(B(X), B̂(X))          (5)
where R^t is a target rate or query budget. The optimization is with respect to the coding and
decoding schemes, the type of feedback (if applicable), and query assignment and rate allocation.
The optimum solution to the above problem is referred to as the distortion-rate function, D*(R^t) (or
CS fidelity-query budget function). A basic distance function, for the case where B(X) is discrete,
is l_0(B(X), B̂(X)), or the Hamming distance. In this case, the average distortion D(B(X), B̂(X))
reflects the average probability of error. As such, the D(R) optimization problem may be rewritten
as follows:

    D*(R^t) = min_{C, C′, R ≤ R^t} P(E : B̂(X) ≠ B(X)).          (6)

In case of crowdsourcing for clustering, this quantifies the performance in terms of the overall
probability of error in clustering. For other crowdsourcing problems, we may consider other distortion
functions. Equivalently, we may consider minimizing the rate subject to a constrained distortion in
crowdsourcing. The R(D) problem is expressed as follows:

    R*(D^t) = min_{C, C′ : D(B(X), B̂(X)) ≤ D^t} R = min_{C, C′ : D(B(X), B̂(X)) ≤ D^t} Σ_{i=1}^{W} m_i / n          (7)
where D^t is a target distortion or average probability of error. The optimum solution to the above
problem is referred to as the rate-distortion function, R*(D^t) (CS query budget-fidelity function). In
case the taskmaster does not know the skill levels of the workers, different workers, disregarding their
skill levels, would receive the same number of queries (m_i = m, ∀i); and the code design involves
designing the query and inference schemes.
3 Information Theoretic CS Budget-Fidelity Limits
In the CS budget-fidelity optimization problem in (5), the code providing the optimized solution
needs to balance two opposing design criteria to meet the target CS fidelity: on one hand, the
design aims at efficiency of the query, making as few queries as possible; on the
other hand, the code needs to take into account the imperfection of worker responses and incorporate
sufficient redundancy. In the realm of information theory (coding theory), the former corresponds to source
coding (compression), the latter corresponds to channel coding (error control coding), and coding
to serve both purposes is a joint source channel code.
In this Section, we first present a brief overview of joint source channel coding and related results
in information theory. Next, we present the CS budget-fidelity function in the two cases of SL-UK and
SL-CS described in Section 2.2.
3.1 Background
Consider the communication of a random source Z from a finite alphabet Z over a discrete memoryless
channel. The source is first processed by an encoder C, whose output is communicated
over the channel. The channel output is processed by a decoder C′, which reconstructs the source as
Ẑ ∈ Ẑ, where we often have Ẑ = Z.
From a rate-distortion theory perspective, we first consider the case where the channel is error free.
The source is iid with probability mass function P(Z) and, based on Shannon's source
coding theorem, is characterized by a rate-distortion function,

    R*(D^t) = min_{C, C′ : D(Z, Ẑ) ≤ D^t} I(Z; Ẑ),          (8)

where I(·,·) indicates the mutual information between two random variables. The source coding is
defined by the following two mappings:

    C : Z^n → {1, . . . , 2^{nR}},        C′ : {1, . . . , 2^{nR}} → Ẑ^n.          (9)
The average distortion is defined in (4) and D^t is the target performance. The optimization in source
coding with distortion is with respect to the source coding or compression scheme, which is described
probabilistically as P(Ẑ|Z) in information theory. The proof of the source coding theorem follows
in two steps: In the first step, we prove that any rate R ≥ R*(D^t) is achievable in the sense that there
exists a family (as a function of n) of codes {C_n, C′_n} for which, as n grows to infinity, the resulting
average distortion satisfies the desired constraint. In the second step, or the converse, we prove that
any code with rate R < R*(D^t) results in an average distortion that violates the desired constraint.
This establishes the described rate-distortion function as the fundamental limit for lossy compression
of a source with a desired maximum average distortion.
From the perspective of Shannon's channel coding theorem, we consider the source as an iid uniform
source and the channel as a discrete memoryless channel characterized by P(V|U), where U ∈ U is
the channel input and V ∈ V is the channel output. The channel coding is defined by the following
two mappings:

    C : {1, . . . , |Z|} → U^n,        C′ : V^n → {1, . . . , |Z|}.          (10)

The theorem establishes the capacity of the channel as C = max_{C,C′} I(Z; Ẑ) and states that for a
rate R, there exists a channel code that provides reliable communication over the noisy channel if
and only if R ≤ C. Again the proof follows in two steps: First, we establish achievability, i.e., we
show that for any rate R ≤ C, there exists a family of codes (as a function of length n) for which
the average probability of error P(Ẑ ≠ Z) goes to zero as n grows to infinity. Next, we prove the
converse, i.e., we show that for any rate R > C, the probability of error is always greater than zero
and grows exponentially fast to 1/2 as R goes beyond C. This establishes the described capacity as
the fundamental limit for transmission of an iid uniform source over a discrete memoryless channel.
For the problem of our interest, i.e., the transmission of an iid source (not necessarily uniform)
over a discrete memoryless channel, the joint source channel coding theorem, aka source-channel
separation theorem, is instrumental. The theorem states that in this setting a code exists that can
facilitate reconstruction of the source with distortion D(Z, Ẑ) ≤ D^t if and only if R*(D^t) < C. For
completeness, we reproduce the theorem from [4] below.
Theorem 1 Let Z be a finite alphabet iid source which is encoded as a sequence of n input symbols
U^n of a discrete memoryless channel with capacity C. The output of the channel V^n is mapped onto
the reconstruction alphabet Ẑ^n = C′(V^n). Let D(Z^n, Ẑ^n) be the average distortion achieved by this
joint source and channel coding scheme. Then distortion D is achievable if and only if C > R*(D).
The proof follows a similar two-step approach to the one described above and assumes a large block length
(n → ∞). The result is important from a communication theoretic perspective, as a concatenation
of a source code, which removes the redundancy and produces an iid uniform output at a rate
R > R*(D^t), and a channel code, which communicates this reliably over the noisy channel at a rate
R < C, can achieve the same fundamental limit.
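As a concrete textbook instance of Theorem 1, a Bernoulli(p) source under Hamming distortion has R(D) = H_b(p) − H_b(D) for 0 ≤ D ≤ min(p, 1 − p), and a binary symmetric channel BSC(ε) has capacity C = 1 − H_b(ε) [4]. The sketch below (our own) checks the separation condition R*(D) < C at one channel use per source symbol:

```python
import math

def hb(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_bernoulli(p, D):
    """R(D) for a Bernoulli(p) source under Hamming distortion."""
    return max(hb(p) - hb(D), 0.0) if D < min(p, 1 - p) else 0.0

def bsc_capacity(eps):
    return 1.0 - hb(eps)

p, D, eps = 0.5, 0.05, 0.02
R, C = rate_distortion_bernoulli(p, D), bsc_capacity(eps)
print(R, C, R < C)
# R(0.05) ~ 0.714 < C ~ 0.859: distortion 0.05 is achievable over BSC(0.02)
```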
3.2 Basic Information Theoretic Bounds
We here consider crowdsourcing within the presented framework and derive basic information
theoretic bounds. Following Section 2.1, we examine the case where a large dataset X (L → ∞) and
a function of interest B(X), with associated probability mass function P(B(X)), are available.
We consider the MSC worker pool model described in Section 2.2, where the skill levels of workers are
drawn from a discrete set E = {ε_1, ε_2, . . . , ε_W} with probability P(ε), ε ∈ E. The number of workers in
each skill level class is assumed large. We here study the two scenarios of SL-UK and SL-CS.
At any given instance, a query is posed to a random worker with a random skill level within the set, E.
We assume there is no feedback available from the decoder (non-adaptive coding) and the queries do
not influence the channel probabilities (no feedforward). Extensions remain for future work.
The following theorem identifies the information theoretic minimum number of queries per item to
perform at least as well as a target fidelity in case the skill levels are not known (SL-UK). The bound
is oblivious to the type of code used and serves as an ultimate performance bound.
Theorem 2 In crowdsourcing for a large dataset of an N-ary discrete source B(X) ~ P(B(X)) with
Hamming distortion, when a large number of unknown workers with skill levels ε ∈ E, ε ~ P(ε),
from an MSC population participate (SL-UK), the minimum number of queries per item to obtain an
overall error probability of at most ε̄ is given by

    R_min = (H(B(X)) − H_N(ε̄)) / (log2 M − H_M(E(ε)))   if ε̄ ≤ min{1 − p_max, 1 − 1/N},
    R_min = 0                                             otherwise,          (11)

in which H_N(ε) ≜ H(1 − ε, ε/(N − 1), . . . , ε/(N − 1)), and p_max = max_{B(X) ∈ B(X)} P(B(X)).
The proof is provided in Appendix A. Another interesting scenario is when the crowdsourcer attempts
to estimate the worker skill levels from the data it has collected as part of the inference. In case this
estimation is done perfectly, the next theorem identifies the corresponding fundamental limit on the
crowdsourcing rate. The proof is provided in Appendix B.
Theorem 3 In crowdsourcing for a large dataset of an N-ary discrete source B(X) ~ P(B(X)) with
Hamming distortion, when a large number of workers with skill levels ε ∈ E, ε ~ P(ε), known to the
crowdsourcer (SL-CS), from an MSC population participate, the minimum number of queries per
item to obtain an overall error probability of at most ε̄ is given by

    R_min = (H(B(X)) − H_N(ε̄)) / (log2 M − E(H_M(ε)))   if ε̄ ≤ min{1 − p_max, 1 − 1/N},
    R_min = 0                                             otherwise.          (12)
Comparing the results in Theorems 2 and 3, the following interesting observation can be made. In case
the worker skill levels are unknown, the CS system provides an overall work quality (capacity) of an
average worker; whereas when the skill levels are known at the crowdsourcer, the system provides an
overall work quality that pertains to the average of the work quality of the workers.
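The bounds (11) and (12) differ only in their denominators; since H_M(·) is concave in ε, E(H_M(ε)) ≤ H_M(E(ε)), so for a common target ε̄ the SL-CS bound never exceeds the SL-UK bound. A small numerical sketch (our own; the skill distribution below is an arbitrary example) evaluates both for a uniform binary source:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def h_skill(eps, M):
    """H_M(eps) = H(1 - eps, eps/(M-1), ..., eps/(M-1))."""
    return entropy([1 - eps] + [eps / (M - 1)] * (M - 1))

def r_min(H_src, N, eps_bar, denom, p_max):
    if eps_bar > min(1 - p_max, 1 - 1 / N):
        return 0.0
    return (H_src - h_skill(eps_bar, N)) / denom

# Uniform binary source (N = 2), binary queries (M = 2),
# two worker classes with skills eps in {0.1, 0.4}, equally likely.
N = M = 2
skills, probs = [0.1, 0.4], [0.5, 0.5]
eps_bar, H_src, p_max = 1e-3, 1.0, 0.5
E_eps = sum(e * p for e, p in zip(skills, probs))
denom_uk = math.log2(M) - h_skill(E_eps, M)                    # Theorem 2
denom_cs = math.log2(M) - sum(p * h_skill(e, M)
                              for e, p in zip(skills, probs))  # Theorem 3
print(r_min(H_src, N, eps_bar, denom_uk, p_max))  # SL-UK: ~5.2 queries/item
print(r_min(H_src, N, eps_bar, denom_cs, p_max))  # SL-CS: ~3.5 queries/item
```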
4 k-ary Incidence Coding
In this Section, we examine the performance of the k-ary incidence coding introduced in Section 2.3.
The k-ary incidence code poses a query as a set of k ≥ 2 items and asks the workers to identify
those with the same label. In the sequel, we begin by deriving a lower bound on the performance
of kIC with a spammer-hammer worker pool. We then present numerical results along with the
information theoretic lower bounds presented in the previous Section.
4.1 Performance of kIC with SHC Worker Pool
We consider kIC for crowdsourcing in the following setting. The items X in the dataset are iid
with N = 2. There is no feedback from the decoder to the task manager (encoder), i.e., the code
is non-adaptive. Since the task manager has no knowledge of the workers' skill levels, it queries
the workers at the same fixed rate of R queries per item. To compose a query, the items are drawn
uniformly at random from the dataset. We assume that the workers are drawn from the SHC(q) model
elaborated in Section 2.2. The purpose is to obtain a lower bound on the performance assuming an
oracle decoder that can perfectly identify the workers' skill levels (here, a spammer or a hammer)
and perform optimal decoding. Specifically, we consider the following:

    min_{C′, C : kIC} P(E : B̂(X) ≠ B(X))          (13)
where minimization is with respect to the choice of a decoder for a given kIC code. We note that the
code length is governed by how the decoder operates, and often could be as long as the dataset. As
evident in (2), in the SHC model, the channel error rate (worker reliability) is explicitly influenced by
the code and the parameter k. In the model of Figure 1a, this implies that a certain static feedforward
exists in this setting. We first present a lemma, which is used later to establish Theorem 4 on kIC
performance. The proofs are respectively provided in Appendix C and Appendix D.
Lemma 1 In crowdsourcing for binary labeling (N = 2) of a uniformly distributed dataset, with
kIC and an SHC worker pool, the probability of error in labeling of an item by a spammer (C = S) is
given by

    ε_S = P(E : B̂(X) ≠ B(X) | C = S)
        = (1/(k·2^{k−1})) Σ_{i=0}^{(k−1)/2} i·C(k, i)                           for k odd,
        = (1/(k·2^{k−1})) [ Σ_{i=0}^{(k−1)/2} i·C(k, i) + (k/4)·C(k, k/2) ]     for k even.
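The closed form in Lemma 1 is straightforward to tabulate. The sketch below (our own) implements the expression as stated, reading the even-k sum as running over integers i ≤ (k − 1)/2:

```python
from math import comb

def eps_spammer(k):
    """Per-item labeling error of a spammer under kIC, N = 2 (Lemma 1)."""
    s = sum(i * comb(k, i) for i in range(0, (k - 1) // 2 + 1))
    if k % 2 == 0:
        s += (k / 4) * comb(k, k // 2)
    return s / (k * 2 ** (k - 1))

for k in range(2, 7):
    print(k, eps_spammer(k))
# k=2: 0.25, k=3: 0.25, k=4: 0.3125, k=5: 0.3125, k=6: 0.34375
```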
Theorem 4 Assuming crowdsourcing using a non-adaptive kIC over a uniformly distributed dataset
(k ≥ 2), if the number of queries per item, R, is less than (1/(k ln(1 − q))) ln(ε̄/ε_S), then no decoder can achieve
an average probability of labeling error less than ε̄ for any L under the SHC(q) worker model.
To interpret and use the result in Theorem 4, we consider the following points: (i) The theorem
presents a necessary condition, i.e., the minimum rate (budget) requirement identified here for kIC
with a given fidelity is a lower bound. This is due to the fact that we are considering an oracle CS
decoder that can perfectly identify the workers' skill levels and correctly label the item if the item
is labeled by at least one hammer out of the kR times it is processed by the workers. (ii) In the current
setting, where the taskmaster does not know the workers' skill levels, each item is included in exactly
kR ∈ Z+ k-ary queries; this is due to the nature of the code. (iii) As discussed in Appendix
E, Theorem 4 can also be used to establish an approximate rule of thumb for pricing. Specifically,
considering two query schemes k_1IC and k_2IC, the query price γ is to be set as γ(k_1)/γ(k_2) ≈ k_1/k_2.
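Combining Lemma 1 with Theorem 4 yields a concrete necessary budget for kIC under SHC(q). In the sketch below (our own; eps_spammer restates Lemma 1 so the snippet is self-contained), the bound shrinks roughly like 1/k, which is consistent with the pricing rule γ(k_1)/γ(k_2) ≈ k_1/k_2 of point (iii):

```python
from math import comb, log

def eps_spammer(k):
    s = sum(i * comb(k, i) for i in range(0, (k - 1) // 2 + 1))
    if k % 2 == 0:
        s += (k / 4) * comb(k, k // 2)
    return s / (k * 2 ** (k - 1))

def kic_rate_lower_bound(k, q, eps_bar):
    """Theorem 4: R must be at least ln(eps_bar / eps_S) / (k ln(1 - q))."""
    return log(eps_bar / eps_spammer(k)) / (k * log(1 - q))

q, eps_bar = 0.3, 1e-3
for k in (2, 3, 4):
    print(k, kic_rate_lower_bound(k, q, eps_bar))
# Both logarithms are negative, so the bound is positive and decreases in k.
```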
4.2 Numerical Results
To obtain an information theoretic benchmark, the next corollary specializes Theorem 3 to the setting
of interest in this Section.
Corollary 1 In crowdsourcing for binary labeling of a uniformly distributed dataset with an SHC(q)
worker pool known to the crowdsourcer (SL-CS), and with M choices offered in responding to a query,
the minimum rate for any given coding scheme to obtain a probability of error of at most ε̄ is

    R_min = (1 − H_b(ε̄)) / (q log2 M) queries per item   if 0 ≤ ε̄ ≤ 0.5,
    R_min = 0                                             otherwise.          (14)
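For comparison with the kIC bound above, the information theoretic limit (14) can be evaluated directly; the sketch below (our own) mirrors the parameters used earlier:

```python
import math

def hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def r_min_shc(eps_bar, q, M):
    """Corollary 1: minimum queries per item over an SHC(q) pool (SL-CS)."""
    if not 0 <= eps_bar <= 0.5:
        return 0.0
    return (1 - hb(eps_bar)) / (q * math.log2(M))

print(r_min_shc(1e-3, q=0.3, M=4))  # ~1.65 queries per item
```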
Figure 2 shows the information theoretic limit of Corollary 1 and the bound obtained in Theorem 4.
For rates (budgets) greater than the former bound, there exists a code which provides crowdsourcing
with the desired fidelity; for rates below this bound, no such code exists. The coding theoretic
lower bounds for kIC depend on k, q, and the fidelity, and improve as k and q grow. The kIC bound for
k = 1 is equivalent to the analysis leading to Lemma 1 of [8].
Figure 2: kIC performance bound and the information theoretic limit
References
[1] I. Abraham, O. Alonso, V. Kandylas, and A. Slivkins. Adaptive crowdsourcing algorithms for
the bandit survey problem. In 26th Conference on Learning Theory (COLT), 2013.
[2] Audubon. History of the christmas bird count, 2015. URL http://birds.audubon.org.
[3] Caltech. Community seismic network project, 2016. URL http://csn.caltech.edu.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New Jersey, USA,
2006.
[5] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using
the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):
20–28, 1979.
[6] O. Dekel and O. Shamir. Vox populi: Collecting high-quality labels from a crowd. In Proceedings of the Twenty-Second Annual Conference on Learning Theory, June 2009.
[7] A. El Gamal and Y.-H. Kim. Network Information Theory. Cambridge University Press, New
York, USA, 2011.
[8] D. R. Karger, S. Oh, and D. Shah. Budget-optimal task allocation for reliable crowdsourcing
systems. Operations Research, 61(1):1–24, 2014.
[9] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning
from crowds. J. Machine Learn. Res., 99(1):1297–1322, 2010.
[10] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective
labeling of venus images. Adv. Neural Inform. Processing Systems, pages 1085–1092, 1995.
[11] R. Korlakai Vinayak, S. Oymak, and B. Hassibi. Graph clustering with missing data: Convex
algorithms and analysis. In Advances in Neural Information Processing Systems 27: Annual
Conference on Neural Information Processing Systems, Montreal, Quebec, Canada, pages
2996–3004, 2014.
[12] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more:
optimal integration of labels from labelers of unknown expertise. Adv. Neural Inform. Processing
Systems, 22(1):2035–2043, 2009.
[13] Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet EM: A provably optimal
algorithm for crowdsourcing. In Advances in Neural Information Processing Systems 27:
Annual Conference on Neural Information Processing Systems, Montreal, Quebec, Canada,
pages 1260–1268, 2014.
5,829 | 6,274 | Generalized Correspondence-LDA Models (GC-LDA)
for Identifying Functional Regions in the Brain
Timothy N. Rubin
SurveyMonkey
Oluwasanmi Koyejo
Univ. of Illinois, Urbana-Champaign
Michael N. Jones
Indiana University
Tal Yarkoni
University of Texas at Austin
Abstract
This paper presents Generalized Correspondence-LDA (GC-LDA), a generalization
of the Correspondence-LDA model that allows for variable spatial representations
to be associated with topics, and increased flexibility in terms of the strength
of the correspondence between data types induced by the model. We present
three variants of GC-LDA, each of which associates topics with a different spatial
representation, and apply them to a corpus of neuroimaging data. In the context of
this dataset, each topic corresponds to a functional brain region, where the region's
spatial extent is captured by a probability distribution over neural activity, and the
region's cognitive function is captured by a probability distribution over linguistic
terms. We illustrate the qualitative improvements offered by GC-LDA in terms
of the types of topics extracted with alternative spatial representations, as well
as the model's ability to incorporate a priori knowledge from the neuroimaging
literature. We furthermore demonstrate that the novel features of GC-LDA improve
predictions for missing data.
1 Introduction
One primary goal of cognitive neuroscience is to find a mapping from neural activity onto cognitive
processes; that is, to identify functional networks in the brain and the role they play in supporting
macroscopic functions. A major milestone towards this goal would be the creation of a "functional-anatomical atlas" of human cognition, where, for each putative cognitive function, one could identify
Efforts to create such functional brain atlases have become increasingly common in recent years. Most studies
have proceeded by applying dimensionality reduction or source decomposition methods such as
Independent Component Analysis (ICA) [4] and clustering analysis [9] to large fMRI datasets such
as the Human Connectome Project [10] or the meta-analytic BrainMap database [8]. While such
work has provided valuable insights, these approaches also have significant drawbacks. In particular,
they typically do not jointly estimate regions along with their mapping onto cognitive processes.
Instead, they first extract a set of neural regions (e.g., via ICA performed on resting-state data), and
then, in a separate stage (if at all), estimate a mapping onto cognitive functions. Such approaches do
not allow information regarding cognitive function to constrain the spatial characterization of the
regions. Moreover, many data-driven parcellation approaches involve a hard assignment of each brain
voxel to a single parcel or cluster, an assumption that violates the many-to-many nature of functional
brain networks. Ideally, a functional-anatomical atlas of human cognition should allow the spatial
and functional correlates of each atom or unit to be jointly characterized, where the function of each
region constrains its spatial boundaries, and vice-versa.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In the current work, we propose Generalized Correspondence LDA (GC-LDA), a novel generalization of the Correspondence-LDA model [2] for modeling multiple data types, where one data type
describes the other. While the proposed approach is general and can be applied to a variety of data,
our work is motivated by its application to neuroimaging meta-analysis. To that end, we consider
several GC-LDA models that we apply to the Neurosynth [12] corpus, consisting of the document
text and neural activation data from a large body of neuroimaging publications. In this context, the
models extract a set of neural "topics", where each topic corresponds to a functional brain region. For
each topic, the model describes its spatial extent (captured via probability distributions over neural
activation) and cognitive function (captured via probability distributions over linguistic terms). These
models provide a novel approach for jointly identifying the spatial location and cognitive mapping of
functional brain regions that is consistent with the many-to-many nature of functional brain networks.
Furthermore, to the best of our knowledge, one of the GC-LDA variants provides the first automated
measure of the lateralization of cognitive functions based on large-scale imaging data.
The GC-LDA and Correspondence-LDA models are extensions of Latent Dirichlet Allocation (LDA)
[3]. Several Bayesian methods with similarities (or equivalences) to LDA have been applied to
different types of neuroimaging data. Poldrack et al. (2012) used standard LDA to derive topics
from the text of the Neurosynth database and then projected the topics onto activation space based on
document-topic loadings [7]. Yeo et al. (2014) used a variant of the Author-Topic model to model the
BrainMap Database [13]. Manning et al. (2014) described a Bayesian method, "Topographic Factor
Analysis", to identify brain regions based on the raw fMRI images (but not text) extracted from a set
of controlled experiments, which can later be mapped on functional categories [5].
Relative to the Correspondence-LDA model, the GC-LDA model incorporates: (i) the ability to
associate different types of spatial distributions with each topic, (ii) flexibility in how strictly the
model enforces a correspondence between the textual and spatial data within each document, and (iii)
the ability to incorporate a priori spatial structure, e.g., encouraging relatively homologous functional
regions located in each brain hemisphere. As we show, these aspects of GC-LDA have a significant
effect on the quality of the estimated topics, as well as on the models' ability to predict missing data.
2 Models
In this paper we propose a set of unsupervised generative models based on the Correspondence-LDA
model [2] that we use to jointly model text and brain activations from the Neurosynth meta-analytic
database [12]. Each of these models, as well as Correspondence-LDA, can be viewed as special cases
of a broader model that we will refer to as Generalized Correspondence-LDA (GC-LDA). In the
section below, we describe the GC-LDA model and its relationship to Correspondence-LDA. We
then detail the specific instances of the model that we use throughout the remainder of the paper. A
summary of the notation used throughout the paper is provided in Table 1.
2.1 Generalized Correspondence LDA (GC-LDA)
Each document d in the corpus is comprised of two types of data: a set of word tokens
w_1^(d), w_2^(d), ..., w_{N_w^(d)}^(d), consisting of unigrams and/or n-grams, and a set of peak activation
tokens x_1^(d), x_2^(d), ..., x_{N_x^(d)}^(d), where N_w^(d) and N_x^(d) are the number of word and activation tokens in
document d, respectively. In the target application, each token x_i is a 3-dimensional vector corresponding to the peak activation coordinates of a value reported in fMRI publications. However, we
note that this model can be directly applied to other types of data, such as segmented images, where
each x_i corresponds to a vector of real-valued features extracted from each image segment (c.f. [2]).
GC-LDA is described by the following generative process (depicted in Figure 1.A):
1. For each topic t ∈ 1, ..., T:¹
   (a) Sample a multinomial distribution over word types φ^(t) ~ Dirichlet(β)
2. For each document d ∈ {1, ..., D}:

¹ To make the model fully generative, one could additionally put a prior on the spatial distribution parameters
Λ^(t) and sample them. For the purposes of the present paper we do not specify a prior on these parameters, and
therefore leave this out of the generative process.
Table 1: Table of notation used throughout the paper

Notation             Meaning
w_i, x_i             The ith word token and peak activation token in the corpus, respectively
N_w^(d), N_x^(d)     The number of word tokens and peak activation tokens in document d, respectively
D                    The number of documents in the corpus
T                    The number of topics in the model
R                    The number of components/subregions in each topic's spatial distribution (subregion models)
z_i                  Indicator variable assigning word token w_i to a topic
y_i                  Indicator variable assigning activation token x_i to a topic
z^(d), y^(d)         The set of all indicator variables for word tokens and activation tokens in document d
N_td^{Y_d}           The number of activation tokens within document d that are assigned to topic t
c_i                  Indicator variable assigning activation token x_i to a subregion (subregion models)
Λ^(t)                Placeholder for all spatial parameters for topic t
μ^(t), σ^(t)         Gaussian parameters for topic t
μ_r^(t), σ_r^(t)     Gaussian parameters for subregion r in topic t (subregion models)
φ^(t)                Multinomial distribution over word types for topic t
φ_w^(t)              Probability of word type w given topic t
θ^(d)                Multinomial distribution over topics for document d
θ_t^(d)              Probability of topic t given document d
π^(t)                Multinomial distribution over subregions for topic t (subregion models)
π_r^(t)              Probability of subregion r given topic t (subregion models)
β, α, γ              Model hyperparameters
δ                    Model hyperparameter (subregion models)
   (a) Sample a multinomial distribution over topics θ^(d) ~ Dirichlet(α)
   (b) For each peak activation token x_i^(d), i ∈ 1, ..., N_x^(d):
       i. Sample indicator variable y_i from Multinomial(θ^(d))
       ii. Sample a peak activation token x_i from the spatial distribution: x_i ~ f(Λ^(y_i))
   (c) For each word token w_i^(d), i ∈ 1, ..., N_w^(d):
       i. Sample indicator variable z_i from
          Multinomial((N_1d^{Y_d} + γ)/(N_x^(d) + γT), (N_2d^{Y_d} + γ)/(N_x^(d) + γT), ..., (N_Td^{Y_d} + γ)/(N_x^(d) + γT)),
          where N_td^{Y_d} is the number of activation tokens y in document d that are assigned to topic t,
          N_x^(d) is the total number of activation tokens in d, and γ is a hyperparameter
       ii. Sample a word token w_i from Multinomial(φ^(z_i))
Intuitively, in the present application of GC-LDA, each topic corresponds to a functional region of the
brain, where the linguistic features for the topic describe the cognitive processes associated with the
spatial distribution of the topic. The resulting joint distribution of all observed peak activation tokens,
word tokens, and latent parameters for each individual document in the GC-LDA model is as follows:

    p(x, w, z, y, θ) = p(θ|α) · [ ∏_{i=1}^{N_x^(d)} p(y_i | θ^(d)) p(x_i | Λ^(y_i)) ] · [ ∏_{j=1}^{N_w^(d)} p(z_j | y^(d), γ) p(w_j | φ^(z_j)) ]          (1)
Note that when γ = 0 and the spatial distribution for each topic is specified as a single multivariate
Gaussian distribution, the model becomes equivalent to a smoothed version of the Correspondence-LDA model described by Blei & Jordan (2003) [2].²
² We note that [2] uses a different generative description for how the z_i variables are sampled conditional on
the y_i indicator variables; in [2], z_i is sampled uniformly from (1, ..., N_y^(d)), and then w_i^(d) is sampled from
the multinomial distribution of the topic that z_i points to. This ends up being functionally equivalent to
the generative description for z_i given here when γ = 0. Additionally, in [2], no prior is put on φ^(t), unlike in
GC-LDA. Therefore, when using GC-LDA with a single multivariate Gaussian and γ = 0, it is equivalent to a
smoothed version of Correspondence-LDA. Dirichlet priors have been demonstrated to be beneficial to model
performance [1], so including a prior on φ^(t) in GC-LDA should have a positive impact.
Figure 1: (A) Graphical model for the Generalized Correspondence-LDA model, GC-LDA. (B)
Graphical model for GC-LDA with spatial distributions modeled as a single multivariate Gaussian
(equivalent to a smoothed version of Correspondence-LDA if γ = 0)². (C) Graphical model for
GC-LDA with subregions, with spatial distributions modeled as a mixture of multivariate Gaussians
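To make the generative story concrete, the following is a minimal forward sampler for the no-subregions variant of Section 2.2 (a single Gaussian per topic); all shapes, parameter values, and helper names are our own illustrative assumptions rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, dim = 10, 500, 3            # topics, vocabulary size, peak-coordinate dim
alpha, beta, gamma = 0.1, 0.01, 0.01

phi = rng.dirichlet(beta * np.ones(V), size=T)   # phi^(t): words given topic
mu = rng.normal(0.0, 30.0, size=(T, dim))        # Lambda^(t) = (mu, sigma)
sigma = np.full((T, dim), 5.0)                   # diagonal std. deviations

def sample_document(n_x, n_w):
    theta = rng.dirichlet(alpha * np.ones(T))    # theta^(d)
    y = rng.choice(T, size=n_x, p=theta)         # topic of each activation
    x = rng.normal(mu[y], sigma[y])              # peak activation tokens
    counts = np.bincount(y, minlength=T)         # N_td^{Y_d}
    p_z = (counts + gamma) / (n_x + gamma * T)   # step (c.i) multinomial
    z = rng.choice(T, size=n_w, p=p_z)           # topic of each word
    w = np.array([rng.choice(V, p=phi[t]) for t in z])
    return x, y, w, z

x, y, w, z = sample_document(n_x=8, n_w=40)
print(x.shape, w.shape)  # (8, 3) (40,)
```

Note how the word-topic indicators z are drawn from the empirical topic counts of the activation tokens (plus the γ smoothing), which is exactly the correspondence mechanism discussed below.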
A key aspect of this model is that it induces a correspondence between the number of activation
tokens and the number of word tokens within a document that will be assigned to the same topic. The
hyperparameter γ controls the strength of this correspondence. If γ = 0, then there is zero probability
that a word for document d will be sampled from topic t if no peak activations in d were sampled
from t. As γ becomes larger, this constraint is relaxed. Although intuitively one might want γ to be
zero in order to maximize the correspondence between the spatial and linguistic information, we have
found that setting γ > 0 leads to significantly better model performance. We conjecture that using a
non-zero γ allows the parameter space to be more efficiently explored during inference, and that it
improves the model's ability to handle data sparsity and noise in high dimensional spaces, similar to
the role that the α and β hyperparameters serve in standard LDA [1].
2.2 Versions of GC-LDA Employed in Current Paper
There are multiple reasonable choices for the spatial distribution p(x_i|Λ(y_i)) in GC-LDA, depending upon the application and the goals of the modeler. For the purposes of the current paper, we considered three variants that are motivated by the target application. The first model, shown in Figure 1.B, employs a single multivariate Gaussian distribution for each topic's spatial distribution, and is therefore equivalent to a smoothed version of Correspondence-LDA if setting γ = 0. The generative process for this model is the same as specified above, with generative step (b.ii) modified as follows: Sample peak activation token x_i from a Gaussian distribution with parameters μ(y_i) and σ(y_i). We refer to this model as the 'no-subregions' model.
The second and third models both employ Gaussian mixtures with R = 2 components for each topic's spatial distribution, and are shown in Figure 1.C. Employing a Gaussian mixture gives the model more flexibility in terms of the types of spatial distributions that can be associated with a topic. This is notably useful in modeling spatial distributions associated with neural activity, as it allows the model to learn topics where a single cognitive function (captured by the linguistic distribution) is associated with spatially discontiguous patterns of activations. In the second GC-LDA model we present, which we refer to as the 'unconstrained subregions' model, the Gaussian mixture components are unconstrained. In the third version of GC-LDA, which we refer to as the 'constrained subregions' model, the Gaussian components are constrained to have symmetric means with respect to their distance from the origin along the horizontal spatial axis (a plane corresponding to the longitudinal fissure in the brain). This constraint is consistent with results from meta-analyses of the fMRI literature, where most studied functions display a high degree of bilateral symmetry [6, 12].
The use of mixture models for representing the spatial distribution in GC-LDA requires the additional parameters c, π, and hyperparameter δ, as well as additional modifications to the description of the generative process. Each topic's spatial distribution in these models is now associated with a multinomial probability distribution π(t) giving the probability of sampling each component r from each topic t, where π_r^(t) is the probability of sampling the rth component (which we will refer to as a subregion) from the tth topic. Variable c_i is an indicator variable that assigns each activation token x_i to a subregion r of the topic to which it is assigned via y_i. A full description of the generative process for these models is provided in Section 1 of the supplementary materials (see footnote 3); a sampling sketch for the modified step (b.ii) follows below.
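For the subregion models, step (b.ii) draws a subregion before the spatial sample. A hedged sketch with our own naming; for the constrained variant we assume coordinate 0 is the horizontal left-right axis:

    import numpy as np

    def sample_activation_subregions(pi_t, mu_t, sigma_t, constrained=False, rng=np.random):
        """Sample one activation token from a topic's R-component Gaussian mixture.

        pi_t    : (R,) subregion weights pi^(t)
        mu_t    : (R, 3) subregion means; sigma_t : (R, 3) stddevs (diagonal covariance)
        """
        mu = mu_t.copy()
        if constrained:
            # constrained model (R = 2): mirror the second mean across the plane of
            # the longitudinal fissure; assumes axis 0 is the left-right axis.
            mu[1] = mu[0] * np.array([-1.0, 1.0, 1.0])
        r = rng.choice(len(pi_t), p=pi_t)      # subregion indicator c_i
        return r, rng.normal(mu[r], sigma_t[r])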
2.3 Inference for GC-LDA
Exact probabilistic inference for the GC-LDA model is intractable. We employed collapsed Gibbs sampling for posterior inference, collapsing out θ(d), φ(t), and π(t) while sampling the indicator variables y_i, z_i and c_i. Spatial distribution parameters Λ(t) are estimated via maximum likelihood. The per-iteration computational complexity of inference is O(T(N_W + N_X R)), where T is the number of topics, R is the number of subregions, and N_W and N_X are the total number of word tokens and activation tokens in the corpus, respectively. Details of the inference methods and sampling equations are provided in Section 2 of the supplement.
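The exact sampling equations are in the supplement; as a rough illustration of one collapsed update, the sketch below resamples a single word indicator z_j, combining the γ-smoothed document factor with word-topic counts that stand in for the collapsed φ(t). Resampling y_i and c_i is analogous but additionally involves the spatial likelihood. All names are ours, not the authors'.

    import numpy as np

    def resample_z(j, z, w, n_t_act, n_x, n_wt, n_t_words, gamma, beta, W, rng=np.random):
        """One collapsed Gibbs update for word indicator z_j.

        n_t_act   : (T,) activation-token counts per topic in this document (from y)
        n_wt      : (W, T) word-topic counts over the corpus (phi collapsed out)
        n_t_words : (T,) total word tokens assigned to each topic over the corpus
        """
        t_old = z[j]
        n_wt[w[j], t_old] -= 1                  # remove token j from the counts
        n_t_words[t_old] -= 1
        # document factor: gamma-smoothed topic proportions induced by y
        p_doc = (n_t_act + gamma) / (n_x + gamma * len(n_t_act))
        # corpus factor: collapsed word-topic distribution with Dirichlet(beta) prior
        p_word = (n_wt[w[j]] + beta) / (n_t_words + beta * W)
        p = p_doc * p_word
        t_new = rng.choice(len(p), p=p / p.sum())
        n_wt[w[j], t_new] += 1                  # add token j back under the new topic
        n_t_words[t_new] += 1
        z[j] = t_new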
3 Experimental Evaluation
We refer to the three versions of GC-LDA described in Section 2 as (1) the 'no subregions' model, in which each topic's spatial distribution is a single multivariate Gaussian distribution, (2) the 'unconstrained subregions' model, in which each topic's spatial distribution is a mixture of R = 2 unconstrained Gaussian distributions, and (3) the 'constrained subregions' model, in which each topic's spatial distribution is a mixture of R = 2 Gaussian distributions whose means are constrained to be symmetric along the horizontal spatial dimension with respect to their distance from the origin.

Our empirical evaluations of the GC-LDA model are based on the application of these models to the Neurosynth meta-analytic database [12]. We first illustrate and contrast the qualitative properties of topics that are extracted by the three versions of GC-LDA (see footnote 4). We then provide a quantitative model comparison, in which the models are evaluated in terms of their ability to predict held out data. These results highlight the promise of GC-LDA and this type of modeling for jointly extracting the spatial extent and cognitive functions of neuroanatomical brain regions.
Neurosynth Database: Neurosynth [12] is a publicly available database consisting of data automatically extracted from a large collection of functional magnetic resonance imaging (fMRI) publications (see footnote 5). For each publication, the database contains the abstract text and all reported 3-dimensional peak activation coordinates (in MNI space) in the study. The text was pre-processed to remove common stop-words. For the version of the Neurosynth database employed in the current paper, there were 11,362 total publications, which had on average 35 peak activation tokens and 46 word tokens after preprocessing (corresponding to approximately 400k activation and 520k word tokens in total).
3.1 Visualizing GC-LDA Topics
In Figure 2 we present several illustrative examples of topics for all three GC-LDA variants that we considered. For each topic, we illustrate the topic's distribution over word types via a word cloud, where the sizes of words are proportional to their probabilities φ_w^(t) in the model. Each topic's spatial distribution over neural activations is illustrated via a kernel-smoothed representation of all activation tokens that were assigned to the topic, overlaid on an image of the brain. For the models that represent spatial distributions using Gaussian mixtures (the unconstrained and constrained subregions models), activations are color-coded based on which subregion they are assigned to, and the mixture weights for the subregions π_r^(t) are depicted above the activation image on the left. In the constrained subregions model (where the means of the two Gaussians were constrained to be symmetric along the horizontal axis) the two subregions correspond to a 'left' and 'right' hemisphere subregion. The following parameter settings were used for generating the images in Figure 2: T = 200, α = .1, β = .01, γ = .01, and for the models with subregions, δ = 1.0.
Footnote 3: Note that these models are still instances of GC-LDA as presented in Figure 1; they can be equivalently formulated by marginalizing out the c_i variables, such that the probability f(x_i|Λ(t)) depends directly on the parameters of each component, and the component probabilities given by π(t).
Footnote 4: A brief discussion of the stability of topics extracted by GC-LDA is provided in Section 3 of the supplement.
Footnote 5: Additional details and Neurosynth data can be found at http://neurosynth.org/
Figure 2: Illustrative examples of topics extracted for the three GC-LDA variants. Probability distributions over word types φ(t) are represented via word clouds, where word sizes are proportional to φ_w^(t). Spatial distributions are illustrated using kernel-smoothed representations of all activation tokens assigned to each topic. For the models with subregions, each activation token's color (blue or red) corresponds to the subregion r that the token is assigned to.
For nearly all of the topics shown in Figure 2, the spatial and linguistic distributions closely correspond to functional regions that are extensively described in the literature (e.g., motor function in primary motor cortex; face processing in the fusiform gyrus, etc.). We note that a key feature of all versions of the GC-LDA model, relative to the majority of existing methods in the literature, is that the model is able to capture the one-to-many mapping from neural regions onto cognitive functions. For example, in all model variants, we observe topics corresponding to auditory processing and language processing (e.g., the topics shown in panels B1 and B3 for the subregions model). While these cognitive processes are distinct, they have partial overlap with respect to the brain networks they recruit: specifically, the superior temporal sulcus in the left hemisphere.
For functional regions that are relatively medial, the no-subregions model is able to capture bilateral
homologues by consolidating them into a single distribution (e.g., the topic shown in A2, which
spans the medial primary somatomotor cortex in both hemispheres). However, for functional regions
that are more laterally localized, the model cannot capture bilateral homologues using a single topic.
For cognitive processes that are highly lateralized (such as language processing, shown in A1, B1 and C1) this poses no concern. However, for functional regions that are laterally distant and do have spatial symmetry, the model ends up distributing the functional region across multiple topics; see,
e.g., the topics shown in A3 and A4 in the no-subregions model, which correspond to the auditory
cortex in the left and right hemisphere respectively. Given that these two topics (and many other pairs
of topics that are not shown) correspond to a single cognitive function, it would be preferable if they
were represented using a single topic. This can potentially be achieved by increasing the flexibility
of the spatial representations associated with each topic, such that the model can capture functional
regions with distant lateral symmetry or other discontiguous spatial features using a single topic. This
motivates the unconstrained and constrained subregions models, in which each topic's spatial distribution is represented by a Gaussian mixture.
In Figure 2, the topics in panels B3 and C3 illustrate how the subregions models are able to handle
symmetric functional regions that are located on the lateral surface of the brain. The lexical distribution for each of these individual topics in the subregions models is similar to that of both the
topics shown in A3 and A4 of the no-subregions model. However, the spatial distributions in B3 and
C3 each capture a summation of the two topics from the no subregions model. In the case of the
constrained subregion model, the symmetry between the means of the spatial distributions for the
subregions is enforced, while for the unconstrained model the symmetry is data-driven and falls out
of the model.
We note that while the unconstrained subregions model picks up spatial symmetry in a significant
subset of topics, it does not always do so. In the case of language processing (panel A1), the
lack of spatial symmetry is consistent with a large fMRI literature demonstrating that language
processing is highly left-lateralized [11]. And in fact, the two subregions in this topic correspond
approximately to Wernicke's and Broca's areas, which are integral to language comprehension and production, respectively. In other cases (e.g., the topics in panels B2 and B4), the unconstrained
subregions model partially captures spatial symmetry with a highly-weighted subregion near the
horizontal midpoint, but also has an additional low-weighted region that is lateralized. While this
result is not necessarily wrong per se, it is somewhat inelegant from a neurobiological standpoint.
Moreover, there are theoretical reasons to prefer a model in which subregions are always laterally symmetrical. Specifically, in instances where the subregions are symmetric (the topic in panel B3 for the unconstrained subregions model and all topics for the constrained subregions model), the subregion weights provide a measure of the relative lateralization of function. For example, the language topic in panel C1 of the constrained subregions model illustrates that while there is neural activation corresponding to linguistic processing in the right hemisphere of the brain, the function is strongly left-lateralized (and vice-versa for face processing, illustrated in panel C2). By enforcing the lateral symmetry in the constrained subregions model, the subregion weights π_r^(t) (illustrated above the left activation images) for each topic inherently correspond to an automated measure of the lateralization of the topic's function. Thus, the constrained model produces what is, to our knowledge, the first data-driven estimation of region-level functional hemispheric asymmetry across the whole brain.
3.2 Predicting Held Out Data
This section describes quantitative comparisons between the three GC-LDA models in terms of their ability to predict held-out data. We split the Neurosynth dataset into a training and test set, where approximately 20% of all data in the corpus was put into the test set. For each document, we randomly removed ⌊0.2 N_x^(d)⌋ peak activation tokens and ⌊0.2 N_w^(d)⌋ word tokens. We trained the models on the remaining data, and then for each model we computed the log-likelihood of the test data, both for the word tokens and peak tokens.
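A minimal sketch of this token-level holdout (our own helper, not the authors' code; tokens are arrays and the split fraction is 0.2):

    import numpy as np

    def split_document(x_tokens, w_tokens, frac=0.2, rng=np.random):
        """Hold out floor(frac * N) activation and word tokens from one document."""
        def split(tokens):
            tokens = np.asarray(tokens)
            n_test = int(np.floor(frac * len(tokens)))
            idx = rng.permutation(len(tokens))
            return tokens[idx[n_test:]], tokens[idx[:n_test]]   # (train, test)
        return split(x_tokens), split(w_tokens)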
The space of possible hyperparameters to explore in GC-LDA is vast, so we restrict our comparison to the aspects of the model which are novel relative to the original Correspondence-LDA model. Specifically, for all three model variants, we compared the log-likelihood of the test data across different values of γ, where γ ∈ {0, 0.001, 0.01, 0.1, 1}. We note again here that the no-subregions model with γ = 0 is equivalent to a smoothed version of Correspondence-LDA [2] (see footnote 2 for additional clarification). The remainder of the parameters were fixed as follows (chosen based on a combination of precedent from the topic modeling literature and preliminary model exploration): T = 100, α = .1, and β = .01 for all models, and δ = 1.0 for the models with subregions. All models were trained for 1000 iterations.
Figure 3 presents the held out log-likelihoods for all models across different settings of γ, in terms of (i) the total log-likelihood for both activation tokens and word tokens (left), (ii) log-likelihood for activation tokens only (middle), and (iii) log-likelihood for word tokens only (right). For both activation tokens and word tokens, for all three versions of GC-LDA, using a non-zero γ leads to significant improvement in performance. In terms of predicting activation tokens alone, there is a monotonic relationship between the size of γ and log-likelihood. This is unsurprising, since increasing γ reduces the extent to which word tokens constrain the spatial fit of the model. In terms of predicting word tokens (and overall log-likelihood), the effect of γ shows an inverted-U function, with the best performance in the range of .01 to .1. These patterns were consistent across all three variants of GC-LDA. Taken together, our results suggest that using a non-zero γ results in a significant improvement over the Correspondence-LDA model.
In terms of comparisons across model variants, we found that both subregions models were significant improvements over the no-subregions model in terms of total log-likelihood, although the no-subregions model performed slightly better than the constrained subregions model at predicting word tokens. Between the two subregions models, performance is overall fairly similar. Generally, the constrained subregions model performs slightly better than the unconstrained model in terms of predicting peak tokens, but slightly worse in terms of predicting word tokens. The differences between the two subregions models in terms of total log-likelihood were negligible. These results do not provide a strong statistical case for choosing one subregions model over the other; instead, they suggest that the modeler ought to choose between models based on their respective theoretical or qualitative properties (e.g., biological plausibility, as discussed in Section 3.1).
Figure 3: Log-likelihoods of held out data for the three GC-LDA models as a function of model parameter γ. Left: total log-likelihood (activation tokens + word tokens). Middle: log-likelihood of activation tokens only. Right: log-likelihood of word tokens only.
4 Summary
We have presented generalized correspondence LDA (GC-LDA), a generalization of the Correspondence-LDA model, with a focus on three variants that capture spatial properties motivated by neuroimaging applications. We illustrated how this model can be applied to a novel type of metadata, namely the spatial peak activation coordinates reported in fMRI publications, and how it can be used to generate a relatively comprehensive atlas of functional brain regions. Our quantitative comparisons demonstrate that the GC-LDA model outperforms the original Correspondence-LDA model at predicting both missing word tokens and missing activation peak tokens. This improvement was demonstrated in terms of both the introduction of the γ parameter, and with respect to alternative parameterizations of topics' spatial distributions.

Beyond these quantitative results, our qualitative analysis demonstrates that the model can recover interpretable topics corresponding closely to known functional regions of the brain. We also showed that one variant of the model can recover known features regarding the hemispheric lateralization of certain cognitive functions. These models show promise for the field of cognitive neuroscience, both for summarizing existing results and for generating novel hypotheses. We also expect that novel features of GC-LDA can be carried over to other extensions of Correspondence-LDA in the literature. In future work, we plan to explore other spatial variants of these models that may better capture the morphological features of distinct brain regions, e.g., using hierarchical priors that can capture the hierarchical organization of brain systems. We also hope to improve the model by incorporating features such as the correlation between topics. Applications and extensions of our approach for more standard image processing applications may also be a fruitful area of research.
References
[1] Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. On smoothing and inference for topic models. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 27-34. AUAI Press, 2009.
[2] David M Blei and Michael I Jordan. Modeling annotated data. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127-134. ACM, 2003.
[3] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022, 2003.
[4] Vince D Calhoun, Jingyu Liu, and Tülay Adalı. A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data. NeuroImage, 45(1):S163-S172, 2009.
[5] Jeremy R Manning, Rajesh Ranganath, Kenneth A Norman, and David M Blei. Topographic factor analysis: a Bayesian model for inferring brain networks from neural data. PLoS ONE, 9(5):e94914, 2014.
[6] Adrian M Owen, Kathryn M McMillan, Angela R Laird, and Ed Bullmore. N-back working memory paradigm: A meta-analysis of normative functional neuroimaging studies. Human Brain Mapping, 25(1):46-59, 2005.
[7] Russell A Poldrack, Jeanette A Mumford, Tom Schonberg, Donald Kalar, Bishal Barman, and Tal Yarkoni. Discovering relations between mind, brain, and mental disorders using topic mapping. PLoS Comput Biol, 8(10):e1002707, 2012.
[8] Stephen M Smith, Peter T Fox, Karla L Miller, David C Glahn, P Mickle Fox, Clare E Mackay, Nicola Filippini, Kate E Watkins, Roberto Toro, Angela R Laird, et al. Correspondence of the brain's functional architecture during activation and rest. Proceedings of the National Academy of Sciences, 106(31):13040-13045, 2009.
[9] Bertrand Thirion, Gaël Varoquaux, Elvis Dohmatob, and Jean-Baptiste Poline. Which fMRI clustering gives good brain parcellations? Frontiers in Neuroscience, 8(167):13, 2014.
[10] David C Van Essen, Stephen M Smith, Deanna M Barch, Timothy EJ Behrens, Essa Yacoub, Kamil Ugurbil, WU-Minn HCP Consortium, et al. The WU-Minn Human Connectome Project: an overview. NeuroImage, 80:62-79, 2013.
[11] Mathieu Vigneau, Virginie Beaucousin, Pierre-Yves Hervé, Hugues Duffau, Fabrice Crivello, Olivier Houdé, Bernard Mazoyer, and Nathalie Tzourio-Mazoyer. Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. NeuroImage, 30(4):1414-1432, 2006.
[12] Tal Yarkoni, Russell A Poldrack, Thomas E Nichols, David C Van Essen, and Tor D Wager. Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8):665-670, 2011.
[13] BT Thomas Yeo, Fenna M Krienen, Simon B Eickhoff, Siti N Yaakub, Peter T Fox, Randy L Buckner, Christopher L Asplund, and Michael WL Chee. Functional specialization and flexibility in human association cortex. Cerebral Cortex, page bhu217, 2014.
Casper Kaae S?nderby?
[email protected]
Tapani Raiko?
[email protected]
S?ren Kaae S?nderby?
[email protected]
Lars Maal?e?
[email protected]
Ole Winther?,?
[email protected]
Abstract
Variational autoencoders are powerful models for unsupervised learning. However
deep models with several layers of dependent stochastic variables are difficult to
train which limits the improvements obtained using these highly expressive models.
We propose a new inference model, the Ladder Variational Autoencoder, that
recursively corrects the generative distribution by a data dependent approximate
likelihood in a process resembling the recently proposed Ladder Network. We
show that this model provides state of the art predictive log-likelihood and tighter
log-likelihood lower bound compared to the purely bottom-up inference in layered
Variational Autoencoders and other generative models. We provide a detailed
analysis of the learned hierarchical latent representation and show that our new
inference model is qualitatively different and utilizes a deeper more distributed
hierarchy of latent variables. Finally, we observe that batch-normalization and
deterministic warm-up (gradually turning on the KL-term) are crucial for training
variational models with many stochastic layers.
1 Introduction
The recently introduced variational autoencoder (VAE) [10, 19] provides a framework for deep
generative models. In this work we study how the variational inference in such models can be
improved while not changing the generative model. We introduce a new inference model using
the same top-down dependency structure in both the inference and generative models achieving
state-of-the-art generative performance.
VAEs, consisting of hierarchies of conditional stochastic variables, are highly expressive models
retaining the computational efficiency of fully factorized models, Figure 1 a). Although highly
flexible, these models are difficult to optimize for deep hierarchies due to the multiple layers of conditional stochastic variables. The VAEs considered here are trained by optimizing a variational approximate posterior lower bounding the intractable true posterior. Recently used inference models are calculated purely bottom-up with no interaction between the inference and generative models [10, 18, 19]. We
propose a new structured inference model using the same top-down dependency structure in both
the inference and generative models. Here the approximate posterior distribution can be viewed
as merging information from a bottom up computed approximate likelihood term with top-down
prior information from the generative distribution, see Figure 1 b). The sharing of information (and
parameters) with the generative model gives the inference model knowledge of the current state of the
generative model in each layer. The top-down pass then recursively corrects the generative distribution with a data-dependent approximate likelihood using a simple precision-weighted addition.
¹ Bioinformatics Centre, Department of Biology, University of Copenhagen, Denmark
² Department of Computer Science, Aalto University, Finland
³ Department of Applied Mathematics and Computer Science, Technical University of Denmark

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Inference (or encoder/recognition) and generative (or decoder) models for a) VAE and b) LVAE. Circles are stochastic variables and diamonds are deterministic variables.

Figure 2: MNIST train (full lines) and test (dashed lines) set log-likelihood using one importance sample during training. The LVAE improves performance significantly over the regular VAE.
This parameterization allows interactions between the bottom-up and top-down signals resembling
the recently proposed Ladder Network [22, 17], and we therefore denote it Ladder-VAE (LVAE). For
the remainder of this paper we will refer to VAEs as both the inference and generative model seen in
Figure 1 a) and similarly LVAE as both the inference and generative model in Figure 1 b). We stress
that the VAE and LVAE have identical generative models and only differ in the inference models.
Previous work on VAEs have been restricted to shallow models with one or two layers of stochastic
latent variables. The performance of such models are constrained by the restrictive mean field
approximation to the intractable posterior distribution. Here we found that purely bottom-up inference
optimized with gradient ascent is only to a limited degree able to utilize more than two layers of
stochastic latent variables. We initially show that a warm-up period [2, 16, Section 6.2] to support
stochastic units staying active in early training and batch-normalization (BN) [7] can significantly
improve performance of VAEs. Using these VAE models as competitive baselines we show that
LVAE improves the generative performance achieving as good or better performance than other (often
complicated) methods for creating flexible variational distributions, such as the Variational Gaussian Process [21], Normalizing Flows [18], Importance Weighted Autoencoders [3] or Auxiliary Deep Generative Models [13]. Compared to the bottom-up inference in VAEs we find that LVAE: 1) has better generative performance, 2) provides a tighter bound on the true log-likelihood and 3) can utilize
deeper and more distributed hierarchies of stochastic variables. Lastly we study the learned latent
representations and find that these differ qualitatively between the LVAE and VAE, with the LVAE capturing more high-level structure in the datasets. In summary our contributions are:

• A new inference model combining a Gaussian term, akin to an approximate Gaussian likelihood, with the generative model, resulting in better generative performance than the normally used bottom-up VAE inference.
• We provide a detailed study of the learned latent distributions and show that LVAE learns both a deeper and more distributed representation when compared to VAE.
• We show that a deterministic warm-up period and batch-normalization are important for training deep stochastic models.
2 Methods

VAEs and LVAEs simultaneously train a generative model p_θ(x, z) = p_θ(x|z) p_θ(z) for data x using latent variables z, and an inference model q_φ(z|x), by optimizing a variational lower bound to the likelihood p_θ(x) = ∫ p_θ(x, z) dz. In the generative model p_θ, the latent variables z are split into L layers z_i, i = 1 ... L, and each stochastic layer is a fully factorized Gaussian distribution conditioned on the layer above:
p_θ(z) = p_θ(z_L) ∏_{i=1}^{L−1} p_θ(z_i | z_{i+1})    (1)
p_θ(z_i | z_{i+1}) = N(z_i | μ_{p,i}(z_{i+1}), σ²_{p,i}(z_{i+1})),   p_θ(z_L) = N(z_L | 0, I)    (2)
p_θ(x | z_1) = N(x | μ_{p,0}(z_1), σ²_{p,0}(z_1))   or   p_θ(x | z_1) = B(x | μ_{p,0}(z_1))    (3)
where the observation model matches either continuous-valued (Gaussian N) or binary-valued (Bernoulli B) data, respectively. We use subscript p (and q) to highlight whether μ or σ² belongs to the generative or inference distribution. Note that while individual conditional distributions are fully factorized, the hierarchical specification allows the lower layers of the latent variables to be highly correlated. The variational principle provides a tractable lower bound on the log-likelihood which can be used as a training criterion L:
p? (x, z)
log p(x) Eq (z|x) log
= L(?, ; x)
(4)
q (z|x)
= KL(q (z|x)||p? (z)) + Eq (z|x) [log p? (x|z)] ,
(5)
where KL is the Kullback-Leibler divergence. A strictly tighter bound on the likelihood may be obtained at the expense of a K-fold increase of samples by using the importance weighted bound [3]:

log p(x) ≥ E_{q_φ(z^(1)|x)} ... E_{q_φ(z^(K)|x)}[ log (1/K) ∑_{k=1}^{K} p_θ(x, z^(k)) / q_φ(z^(k)|x) ] = L_K(θ, φ; x) ≥ L(θ, φ; x).    (6)
The generative and inference parameters, θ and φ, are jointly trained by optimizing Eq. (5) using stochastic gradient descent, where we use the reparametrization trick for stochastic backpropagation through the Gaussian latent variables [10, 19]. The KL(q_φ || p_θ) term is calculated analytically at each layer when possible and otherwise approximated using Monte Carlo sampling.
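For a single stochastic layer with a standard normal prior and a Bernoulli decoder, a one-sample estimate of Eq. (5) looks as in the sketch below (numpy forward pass only; `decode` is a placeholder for the generative network, and a real implementation would rely on Theano or a similar framework for gradients):

    import numpy as np

    def elbo_one_layer(x, mu_q, log_var_q, decode, rng=np.random):
        """One-sample estimate of Eq. (5) for one layer with p(z) = N(0, I).

        decode(z) -> Bernoulli means for x; mu_q, log_var_q come from the encoder.
        """
        eps = rng.standard_normal(mu_q.shape)
        z = mu_q + np.exp(0.5 * log_var_q) * eps          # reparametrization trick
        p = np.clip(decode(z), 1e-7, 1 - 1e-7)
        log_px_z = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        # analytic KL(N(mu, sigma^2) || N(0, I)), summed over latent units
        kl = -0.5 * np.sum(1 + log_var_q - mu_q**2 - np.exp(log_var_q))
        return log_px_z - kl

The K-sample bound of Eq. (6) is obtained by averaging the importance weights p_θ(x, z^(k))/q_φ(z^(k)|x) before taking the log, typically via a log-sum-exp for numerical stability.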
2.1 Variational autoencoder inference model
VAE inference models are parameterized as a bottom-up process similar to [3, 9]. Conditioned on the stochastic layer below, each stochastic layer is specified as a fully factorized Gaussian distribution:

q_φ(z|x) = q_φ(z_1|x) ∏_{i=2}^{L} q_φ(z_i | z_{i−1})    (7)
q_φ(z_1|x) = N(z_1 | μ_{q,1}(x), σ²_{q,1}(x))    (8)
q_φ(z_i | z_{i−1}) = N(z_i | μ_{q,i}(z_{i−1}), σ²_{q,i}(z_{i−1})), i = 2 ... L.    (9)
In this parameterization the inference and generative distributions are computed separately with no explicit sharing of information. In the beginning of the training procedure this might cause problems since the inference model has to approximately match the highly variable generative distribution in order to optimize the likelihood. The functions μ(·) and σ²(·) in the generative and VAE inference models are implemented as:

d(y) = MLP(y)    (10)
μ(y) = Linear(d(y))    (11)
σ²(y) = Softplus(Linear(d(y))),    (12)
where MLP is a two layered multilayer perceptron network, Linear is a single linear layer, and Softplus applies log(1 + exp(·)) nonlinearity to each component of its argument vector, ensuring positive variances. In our notation, each MLP(·) or Linear(·) gives a new mapping with its own parameters, so the deterministic variable d is used to mark that the MLP-part is shared between μ and σ² whereas the last Linear layer is not shared.
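A sketch of Eqs. (10)-(12) as a plain numpy forward pass (weights assumed given; the leaky rectifier matches the MNIST setup described in Section 3):

    import numpy as np

    def leaky_relu(a):
        # max(x, 0.1x), the nonlinearity used for the MNIST models
        return np.maximum(a, 0.1 * a)

    def mlp2(y, W1, b1, W2, b2):
        """Two-layer MLP trunk d(y), Eq. (10), shared between the mu and var heads."""
        return leaky_relu(leaky_relu(y @ W1 + b1) @ W2 + b2)

    def gaussian_params(d, W_mu, b_mu, W_var, b_var):
        """mu and sigma^2 heads, Eqs. (11)-(12)."""
        mu = d @ W_mu + b_mu
        var = np.logaddexp(0.0, d @ W_var + b_var)   # softplus keeps variances positive
        return mu, var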
2.2 Ladder variational autoencoder inference model
Figure 3: MNIST log-likelihood values for VAEs and the LVAE model with different number of latent layers, batch-normalization (BN) and warm-up (WU). a) Train log-likelihood, b) test log-likelihood and c) test log-likelihood with 5000 importance samples.

We propose a new inference model that recursively corrects the generative distribution with a data dependent approximate likelihood term. First a deterministic upward pass computes the Gaussian likelihood-like contributions:

d_n = MLP(d_{n−1})    (13)
μ̂_{q,i} = Linear(d_i), i = 1 ... L    (14)
σ̂²_{q,i} = Softplus(Linear(d_i)), i = 1 ... L    (15)
where d_0 = x. This is followed by a stochastic downward pass recursively computing both the approximate posterior and generative distributions:

q_φ(z|x) = q_φ(z_L|x) ∏_{i=1}^{L−1} q_φ(z_i | z_{i+1}, x)    (16)
σ²_{q,i} = 1 / (σ̂⁻²_{q,i} + σ⁻²_{p,i})    (17)
μ_{q,i} = (μ̂_{q,i} σ̂⁻²_{q,i} + μ_{p,i} σ⁻²_{p,i}) / (σ̂⁻²_{q,i} + σ⁻²_{p,i})    (18)
q_φ(z_i|·) = N(z_i | μ_{q,i}, σ²_{q,i}),    (19)

where μ_{q,L} = μ̂_{q,L} and σ²_{q,L} = σ̂²_{q,L}. The inference model is a precision-weighted combination of μ̂_q and σ̂²_q carrying bottom-up information and μ_p and σ²_p from the generative distribution carrying top-down prior information. This parameterization has a probabilistic motivation by viewing μ̂_q and σ̂²_q as an approximate Gaussian likelihood that is combined with a Gaussian prior μ_p and σ²_p from the generative distribution. Together these form the approximate posterior distribution q_φ(z|x) using the same top-down dependency structure in both the inference and generative model. A line of motivation, already noted in [4] (see [1] for a recent approach), is that a purely bottom-up inference process, as in VAEs, does not correspond well with real perception, where iterative interaction between bottom-up and top-down signals produces the final activity of a unit (see footnote 4). Notably it is difficult for purely bottom-up inference networks to model the explaining-away phenomenon; see [23, Chapter 5] for a recent discussion of this phenomenon. The LVAE model provides a framework with the wanted interaction, while not increasing the number of parameters.
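The merge in Eqs. (17)-(18) amounts to a few lines; a sketch with our own naming:

    import numpy as np

    def precision_weighted(mu_q_hat, var_q_hat, mu_p, var_p):
        """Combine the bottom-up likelihood term (mu_q_hat, var_q_hat) with the
        top-down prior (mu_p, var_p) into posterior parameters, Eqs. (17)-(18)."""
        prec_q, prec_p = 1.0 / var_q_hat, 1.0 / var_p
        var = 1.0 / (prec_q + prec_p)
        mu = (mu_q_hat * prec_q + mu_p * prec_p) * var
        return mu, var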
2.3 Warm-up from deterministic to variational autoencoder
The variational training criterion in Eq. (5) contains the reconstruction term p_θ(x|z) and the variational regularization term. The variational regularization term causes some of the latent units to become uninformative during training [14] because the approximate posterior for unit k, q(z_{i,k}|...), is regularized towards its own prior p(z_{i,k}|...), a phenomenon also recognized in the VAE setting [3, 2]. This can be seen as a virtue of automatic relevance determination, but also as a problem when many units collapse early in training before they have learned a useful representation. We observed that such units remain uninformative for the rest of the training, presumably trapped in a local minimum or saddle point at KL(q_{i,k}|p_{i,k}) ≈ 0, with the optimization algorithm unable to re-activate them.

Footnote 4: The idea was dismissed at the time, since it could introduce substantial theoretical complications.
We alleviate the problem by initializing training using the reconstruction error only (corresponding to training a standard deterministic auto-encoder), and then gradually introducing the variational regularization term:

L(θ, φ; x)_WU = −β KL(q_φ(z|x) || p_θ(z)) + E_{q_φ(z|x)}[ log p_θ(x|z) ],    (20)

where β is increased linearly from 0 to 1 during the first N_t epochs of training. We denote this scheme warm-up (abbreviated WU in tables and graphs) because the objective goes from having a delta-function solution (corresponding to zero temperature) and then moves towards the fully stochastic variational objective. This idea has previously been considered in [16, Section 6.2] and more recently in [2].
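The schedule itself is a one-liner; a sketch of how it can enter a training loop (our variable names):

    def kl_weight(epoch, n_warmup=200):
        """Linear warm-up of the KL weight beta in Eq. (20): 0 -> 1 over n_warmup epochs."""
        return min(1.0, epoch / float(n_warmup))

    # inside the training loop, the per-example objective becomes, e.g.:
    # loss = -(log_px_z - kl_weight(epoch, Nt) * kl)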
3 Experiments
To test our models we use the standard benchmark datasets MNIST, OMNIGLOT [11] and NORB [12]. The largest models trained used a hierarchy of five layers of stochastic latent variables of sizes 64, 32, 16, 8 and 4, going from bottom to top. We implemented all mappings using MLPs with two layers of deterministic hidden units. In all models the MLPs between x and z_1 or d_1 were of size 512. Subsequent layers were connected by MLPs of sizes 256, 128, 64 and 32 for all connections in both the VAE and LVAE. Shallower models were created by removing latent variables from the top of the hierarchy. We sometimes refer to the five layer models as 64-32-16-8-4, the four layer models as 64-32-16-8 and so forth. The models were trained end-to-end using the Adam [8] optimizer with a mini-batch size of 256. We report the train and test log-likelihood lower bounds, Eq. (5), as well as the approximated true log-likelihood calculated using 5000 importance weighted samples, Eq. (6). The models were implemented using the Theano [20], Lasagne [5] and Parmesan (footnote 5) frameworks. The source code is available at github (footnote 6).
For MNIST, we used a sigmoid output layer to predict the mean of a Bernoulli observation model and leaky rectifiers (max(x, 0.1x)) as nonlinearities in the MLPs. The models were trained for 2000 epochs with a learning rate of 0.001 on the complete training set. Models using warm-up used N_t = 200. Similarly to [3], we resample the binarized training values from the real-valued images using a Bernoulli distribution after each epoch, which prevents the models from over-fitting. Some of the models were fine-tuned by continuing training for 2000 epochs while multiplying the learning rate by 0.75 after every 200 epochs and increasing the number of Monte Carlo and importance weighted samples to 10, to reduce the variance in the approximation of the expectations in Eq. (4) and improve the inference model, respectively.
Models trained on the OMNIGLOT dataset (footnote 7), consisting of 28x28 binary images, were trained similarly to the above except that the number of training epochs was 1500.

Models trained on the NORB dataset (footnote 8), consisting of 32x32 gray-scale images with color-coding rescaled to [0, 1], used a Gaussian observation model with mean and variance predicted using a linear and a softplus output layer, respectively. The settings were similar to the models above except that hyperbolic tangent was used as the nonlinearity in the MLPs and the number of training epochs was 2000.
3.1 Generative log-likelihood performance
In Figure 3 we show the train and test set log-likelihood on the MNIST dataset for a series of different
models with varying number of stochastic layers.
Consider L_1^test in Figure 3 b): the VAE without batch-normalization and warm-up does not improve for additional stochastic layers beyond one, whereas VAEs with batch-normalization and warm-up improve performance up to three layers. The LVAE models perform better, improving performance for each additional layer and reaching L_1^test = −85.23 with five layers, which is significantly higher than the best VAE score at −87.49 using three layers. As expected the improvement in performance is decreasing for each additional layer, but we emphasize that the improvements are consistent even for the addition of the top-most layers. We found batch-normalization improved performance for all models; however, especially for LVAE we found batch-normalization to be important. In Figure 3 c) the approximated true log-likelihood estimated using 5000 importance weighted samples is seen. Again the LVAE models perform better than the VAE, reaching L_5000^test = −82.12 compared to the best VAE at −82.74. These results show that the LVAE achieves both a higher approximate log-likelihood score and a significantly tighter lower bound on the log-likelihood L_1^test. The models in Figure 3 were trained using a fixed learning rate and one Monte Carlo and importance weighted sample. To improve performance we fine-tuned the best performing five layer LVAE models by training these for a further 2000 epochs with annealed learning rate and an increased number of IW samples, and see slight improvements in the test set log-likelihood values, Table 1. We saw no signs of over-fitting for any of our models even though the hierarchical latent representations are highly expressive as seen in Figure 2.

Footnote 5: https://github.com/casperkaae/parmesan
Footnote 6: https://github.com/casperkaae/LVAE
Footnote 7: The OMNIGLOT data was partitioned and preprocessed as in [3], https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT
Footnote 8: The NORB dataset was downloaded in resized format from github.com/gwtaylor/convnet_matlab

Figure 4: log KL(q|p) for each latent unit is shown at different training epochs. Low KL (white) corresponds to an uninformative unit. The units are sorted for visualization. It is clear that the vanilla VAE cannot train the higher latent layers, while introducing batch-normalization helps. Warm-up creates more active units early in training, some of which are then gradually pruned away during training, resulting in a more distributed final representation. Lastly, we see that the LVAE activates the highest number of units in each layer.

Table 1: Test set MNIST performance for importance weighted autoencoder (IWAE), VAE with normalizing flows (NF) and VAE with variational Gaussian process (VGP). Number of importance weighted (IW) samples used for training is one unless otherwise stated.

Model                                   ≤ log p(x)
VAE 1-layer + NF [18]                   -85.10
IWAE, 2-layer + IW=1 [3]                -85.33
IWAE, 2-layer + IW=50 [3]               -82.90
VAE, 2-layer + VGP [21]                 -81.90
LVAE, 5-layer                           -82.12
LVAE, 5-layer + finetuning              -81.84
LVAE, 5-layer + finetuning + IW=10      -81.74
Comparing the results obtained here with current state-of-the-art results on permutation invariant MNIST, Table 1, we see that the LVAE performs better than the normalizing flow VAE and importance weighted VAE, and comparably to the Variational Gaussian Process VAE. However we note that the results are not directly comparable due to differences in the training procedures.

To test the models on more challenging data we used the OMNIGLOT dataset, consisting of characters from 50 different alphabets with 20 samples of each character. The log-likelihood values, Table 2, show similar trends as for MNIST, with the LVAE achieving the best performance using five layers of latent variables (see the appendix for further results).
Table 2: Test set log-likelihood scores for models trained on the OMNIGLOT and NORB datasets. The leftmost column shows the dataset and the number of latent variables in each model.

OMNIGLOT         VAE        VAE+BN     VAE+BN+WU    LVAE+BN+WU
64               -111.21    -105.62    -104.51      -
64-32            -110.58    -105.51    -102.61      -102.63
64-32-16         -111.26    -106.09    -102.52      -102.18
64-32-16-8       -111.58    -105.66    -102.66      -102.21
64-32-16-8-4     -110.46    -105.45    -102.48      -102.11

NORB             VAE        VAE+BN     VAE+BN+WU    LVAE+BN+WU
64               2741       3198       3338         -
64-32            2792       3224       3483         3272
64-32-16         2786       3235       3492         3519
64-32-16-8       2689       3201       3482         3449
64-32-16-8-4     2654       3198       3422         3455
The best log-likelihood result obtained here, −102.11, is higher than the best result from [3] at −103.38, which was obtained using more latent variables (100-50 vs 64-32-16-8-4) and further using 50 importance weighted samples for training.

We tested the models using a continuous Gaussian observation model on the NORB dataset, consisting of gray-scale images of 5 different toy objects under different illuminations and observation angles. The LVAE achieves a slightly higher score than the VAE; however, none of the models see an increase in performance when using more than three stochastic layers. We found the Gaussian observation models to be harder to optimize compared to the Bernoulli models, a finding also recognized in [24], which might explain the lower utilization of the topmost latent layers in these models.
3.2 Latent representations
The probabilistic generative models studied here automatically tune the model complexity to the data
by reducing the effective dimension of the latent representation due to the regularization effect of the
priors in Eq. (4). However, as previously identified [16, 3], the latent representation is often overly
sparse with few stochastic latent variables propagating useful information.
To study the importance of individual units, we split the variational training criterion L into a sum of terms corresponding to each unit k in each layer i. For stochastic latent units, this is the KL-divergence between q(z_{i,k}|·) and p(z_{i,k}|z_{i+1}). Figure 4 shows the evolution of these terms during training. This term is zero if the inference model has collapsed onto the prior, carrying no information about the data and making the unit uninformative. For the models without warm-up we find that the KL-divergence for each unit is stable during all training epochs, with only very few new units activated during training. For the models trained with warm-up we initially see many active units which are then gradually pruned away as the variational regularization term is introduced. At the end of training, warm-up results in more active units, indicating a more distributed representation; further, the LVAE model produces both the deepest and most distributed latent representation.
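The per-unit quantity plotted in Figure 4 is the KL-divergence between two diagonal Gaussians; a sketch of how it can be computed and used to flag collapsed units (the threshold below is our own choice):

    import numpy as np

    def kl_per_unit(mu_q, var_q, mu_p, var_p):
        """KL(N(mu_q, var_q) || N(mu_p, var_p)) per latent unit (diagonal Gaussians)."""
        return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

    # units whose KL stays near zero carry no information about the data, e.g.:
    # inactive = kl_per_unit(mu_q, var_q, mu_p, var_p).mean(axis=0) < 1e-2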
We also study the importance of layers by splitting the training criterion layer-wise as seen in Figure 5. This measures how much of the representation work (or innovation) is done in each layer. The VAEs use the lower layers the most, whereas the highest layers are not (or only to a limited degree) used. Contrary to this, the LVAE puts much more importance on the higher layers, which shows that it learns both a deeper and qualitatively different hierarchical latent representation; this might explain the better performance of the model. To qualitatively study the learned representations, PCA plots of z_i ∼ q(z_i|·) are seen in Figure 6. For the vanilla VAE, the latent representations above the second layer are completely collapsed on a standard normal prior. Including batch-normalization and warm-up activates one additional layer each in the VAE. The LVAE utilizes all five latent layers and the latent representation shows progressively more clustering according to class, which is clearly seen in the topmost layer of this model. These findings indicate that the LVAE produces structured high-level latent representations that are likely useful for semi-supervised learning.
Figure 5: Layer-wise KL(q|p) divergence going from the lowest to the highest layers. In the VAE models the KL divergence is highest in the lowest layers, whereas it is more distributed in the LVAE model.

Figure 6: PCA-plots of samples from q(z_i|z_{i−1}) for 5-layer VAE and LVAE models trained on MNIST, color-coded according to true class label.
4 Conclusion and Discussion
We presented a new inference model for VAEs combining a bottom-up data-dependent approximate
likelihood term with prior information from the generative distribution. We showed that this parameterization 1) increases the approximated log-likelihood compared to VAEs, 2) provides a tighter
bound on the log-likelihood and 3) learns a deeper and qualitatively different latent representation of
the data. Secondly we showed that deterministic warm-up and batch-normalization are important for
optimizing deep VAEs and LVAEs. Especially the large benefits in generative performance and depth
of learned hierarchical representations using batch-normalization were surprising given the additional
noise introduced. This is something that is not fully understood and deserves further investigation
and although batch-normalization is not novel we believe that this finding in the context of VAEs are
important.
The inference in LVAE is computed recursively by correcting the generative distribution with a
data-dependent approximate likelihood contribution. Compared to purely bottom-up inference,
this parameterization makes the optimization easier since the inference is simply correcting the
generative distribution instead of fitting the two models separately. We believe this explicit parameter
sharing between the inference and generative distribution can generally be beneficial in other types
of recursive variational distributions such as DRAW [6] where the ideas presented here are directly
applicable. Further the LVAE is orthogonal to other methods for improving the inference distribution
such as Normalizing flows [18], Variational Gaussian Process [21] or Auxiliary Deep generative
models [13] and combining with these might provide further improvements.
Other directions for future work include extending these models to semi-supervised learning which
will likely benefit from the learned deep structured hierarchies of latent variables, and studying more
elaborate inference schemes such as a k-step iterative inference in the LVAE [15].
References
[1] J. Bornschein, S. Shabanian, A. Fischer, and Y. Bengio. Bidirectional helmholtz machines.
arXiv preprint arXiv:1506.03877, 2015.
8
[2] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio. Generating
sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[3] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint
arXiv:1509.00519, 2015.
[4] P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The Helmholtz machine. Neural
computation, 7(5):889?904, 1995.
[5] S. Dieleman, J. Schl?ter, C. Raffel, E. Olson, S. K. S?nderby, D. Nouri, A. van den Oord, and
E. B. and. Lasagne: First release., Aug. 2015.
[6] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for
image generation. arXiv preprint arXiv:1502.04623, 2015.
[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[8] D. Kingma and J. Ba.
arXiv:1412.6980, 2014.
Adam: A method for stochastic optimization.
arXiv preprint
[9] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with
deep generative models. In Advances in Neural Information Processing Systems, 2014.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[11] B. M. Lake, R. R. Salakhutdinov, and J. Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems, 2013.
[12] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with
invariance to pose and lighting. In Computer Vision and Pattern Recognition. IEEE, 2004.
[13] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models.
Proceedings of the 33nd International Conference on Machine Learning, 2016.
[14] D. J. MacKay. Local minima, symmetry-breaking, and model pruning in variational free energy
minimization. Inference Group, Cavendish Laboratory, Cambridge, UK, 2001.
[15] T. Raiko, Y. Li, K. Cho, and Y. Bengio. Iterative neural autoregressive distribution estimator
NADE-k. In Advances in Neural Information Processing Systems, 2014.
[16] T. Raiko, H. Valpola, M. Harva, and J. Karhunen. Building blocks for variational Bayesian
learning of latent variable models. The Journal of Machine Learning Research, 8, 2007.
[17] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with
ladder networks. In Advances in Neural Information Processing Systems, 2015.
[18] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. arXiv preprint
arXiv:1505.05770, 2015.
[19] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate
inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[20] Theano Development Team. Theano: A Python framework for fast computation of mathematical
expressions. arXiv e-prints, abs/1605.02688, May 2016.
[21] D. Tran, R. Ranganath, and D. M. Blei. Variational Gaussian process. arXiv preprint
arXiv:1511.06499, 2015.
[22] H. Valpola. From neural PCA to deep unsupervised learning. arXiv preprint arXiv:1411.7783, 2015.
[23] G. van den Broeke. What auto-encoders could learn from brains - generation as feedback in
unsupervised deep learning and inference, 2016. MSc thesis, Aalto University, Finland.
[24] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv
preprint arXiv:1601.06759, 2016.
5,831 | 6,276 | Select-and-Sample for Spike-and-Slab Sparse Coding
Abdul-Saboor Sheikh
Technical University of Berlin, Germany,
and Cluster of Excellence Hearing4all
University of Oldenburg, Germany,
and SAP Innovation Center Network, Berlin
[email protected]
Jörg Lücke
Research Center Neurosensory Science
and Cluster of Excellence Hearing4all
and Dept. of Medical Physics and Acoustics
University of Oldenburg, Germany
[email protected]
Abstract
Probabilistic inference serves as a popular model for neural processing. It is
still unclear, however, how approximate probabilistic inference can be accurate and scalable to very high-dimensional continuous latent spaces, especially as typical posteriors for sensory data can be expected to exhibit complex latent dependencies including multiple modes. Here, we study an approach that can efficiently be scaled while maintaining a richly structured posterior approximation
under these conditions. As example model we use spike-and-slab sparse coding for
V1 processing, and combine latent subspace selection with Gibbs sampling (select-and-sample). Unlike factored variational approaches, the method can maintain
large numbers of posterior modes and complex latent dependencies. Unlike pure
sampling, the method is scalable to very high-dimensional latent spaces. Among all
sparse coding approaches with non-trivial posterior approximations (MAP or ICA-like models), we report the largest-scale results. In applications we first verify the
approach by showing competitiveness in standard denoising benchmarks. Secondly,
we use its scalability to, for the first time, study highly-overcomplete settings for
V1 encoding using sophisticated posterior representations. More generally, our
study shows that very accurate probabilistic inference for multi-modal posteriors
with complex dependencies is tractable, functionally desirable and consistent with
models for neural inference.
1 Introduction
The sensory data that enters our brain through our sensors has a high intrinsic dimensionality and
it is complex and ambiguous. Image patches or small snippets of sound, for instance, often do
not contain sufficient information to identify edges or phonemes with high degrees of certainty.
Probabilistic models are therefore very well suited to maintain uncertainty encodings. Given an
image patch, for instance, high probabilities for an edge in one location impact the probabilities
for other components resulting in complex dependencies commonly known as "explaining-away"
effects. Such dependencies in general include (anti-)correlations, higher-order dependencies and
multiple posterior modes (i.e., alternative interpretations of a patch). Furthermore, sensory data
is typically composed of many different elementary constituents (e.g., an image patch contains
some of a potentially very large number of components) resulting in sparse coding models aiming
at increasing overcompleteness [1]. If sensory data gives rise to complex posterior dependencies
and has high intrinsic dimensionality, how can we study inference and learning in such settings?
To date most studies, e.g. of V1 encoding models, have avoided the treatment of complex latent
dependencies by assuming standard sparse models with Laplace priors [2, 3, 1]; high-dimensional
problems can then be addressed by applying maximum a-posteriori (MAP) approximations for the
resulting mono-modal posteriors. Other scalable approaches such as independent component analysis
(ICA) or singular value decomposition (K-SVD) [4, 5] do not encode for data uncertainty, which
avoids posterior estimations altogether. For advanced data models, which we expect to be required,
e.g., for visual data, neither MAP nor a non-probabilistic treatment can be expected to be sufficient. It was shown in a number of studies, for example, that sparse coding models with more flexible spike-and-slab priors (A) are more closely aligned with the true generative process, e.g., for images, and (B) result in improved functional performance [6, 7]. Spike-and-slab priors do, however,
result in posteriors with complex dependencies including many modes [8, 7]. Inference w.r.t. spike-and-slab sparse coding is therefore well suited to, in general, study efficient inference and learning with complex posteriors in high dimensions. Results for spike-and-slab sparse coding are,
furthermore, of direct interest for other important models such as hierarchical communities of experts
[9], deep Boltzmann Machines (see [6]), or convolutional neural networks [10]. Also for these typically deep systems, very high-dimensional inference and learning is of crucial importance.
So far, intractable inference for spike-and-slab sparse coding was approximated using sampling or
factored variational approaches. While sampling approaches can in principle model any dependencies
including multiple modes, they have been found challenging to train at scale, with the largest-scale applications going up to a few hundred latents [11, 12]. Compared to sampling, approaches using factored variational approximations cannot model as complex posterior dependencies because they assume posterior independence (no correlations etc.); however, they can capture multiple modes and are scalable to several hundreds up to thousands of latents [8, 6]. In this work we combine the accuracy of sampling approaches and the scalability of variational approaches by applying select-and-sample [13] to scale spike-and-slab sparse coding to very high latent dimensions. In contrast to using
a factored approximation, we here select low dimensional subspaces of the continuous hidden space,
and then apply sampling to approximate posteriors within these lower dimensional spaces.
2 The Spike-and-Slab Sparse Coding Model and Parameter Optimization
The spike-and-slab sparse coding model (see [8, 6] and citations therein) used for our study assumes
a Bernoulli prior over all $H$ components of the binary latent vector $\vec{b} \in \{0,1\}^H$, with a Gaussian prior (the "slab") for the continuous latent vector $\vec{z} \in \mathbb{R}^H$:

$$p(\vec{b}\,|\,\Theta) = \prod_h \pi^{b_h}(1-\pi)^{1-b_h}, \qquad p(\vec{z}\,|\,\Theta) = \prod_h \mathcal{N}(z_h;\mu_h,\psi_h^2), \tag{1}$$

where $\pi$ defines the probability of $b_h$ being equal to one and where $\vec{\mu} \in \mathbb{R}^H$ and $\vec{\psi} \in \mathbb{R}^H$ parameterize the Gaussian slab. A spike-and-slab hidden variable $\vec{s} \in \mathbb{R}^H$ is then generated by a pointwise multiplication: $\vec{s} = (\vec{b} \odot \vec{z})$, i.e., $s_h = b_h z_h$. Given the hidden variable $\vec{s}$, we follow standard sparse coding by linearly superimposing a set of latent components (i.e., $W\vec{s} = \sum_h \vec{W}_h s_h$) to initialize the mean of a Gaussian noise model:
$$p(\vec{y}\,|\,\vec{s},\Theta) = \prod_d \mathcal{N}\Big(y_d;\ \sum_h W_{dh} s_h,\ \sigma_d^2\Big), \tag{2}$$
which then generates the observed data $\vec{y} \in \mathbb{R}^D$. Here the columns of the matrix $W \in \mathbb{R}^{D\times H}$ are each a latent component $\vec{W}_h$ that is associated with a spike-and-slab latent variable $s_h$. We use $\vec{\sigma} \in \mathbb{R}^D$ to parameterize the observation noise. The parameters of the generative model (1) to (2) are together denoted by $\Theta = (\pi, \vec{\mu}, \vec{\psi}, W, \vec{\sigma})$. To find the values of $\Theta$, we seek to maximize the data likelihood $\mathcal{L} = \prod_{n=1}^{N} p(\vec{y}^{(n)}\,|\,\Theta)$ under the spike-and-slab data model and given a set of $N$ data points $\{\vec{y}^{(n)}\}_{n=1,\ldots,N}$. To derive a learning algorithm, we apply expectation maximization (EM) in
its free-energy formulation. In our case the free-energy is given by:

$$\mathcal{F}(q,\Theta) = \sum_{n=1}^{N}\Big[\big\langle \log p(\vec{y}^{(n)},\vec{s}\,|\,\Theta)\big\rangle_n + H(q^{(n)})\Big], \quad\text{where}\quad \langle f(\vec{s})\rangle_n = \int q^{(n)}(\vec{s})\, f(\vec{s})\, d\vec{s}$$

is the expectation under $q^{(n)}$, a distribution over the latent space, and $H(\cdot)$ denotes the Shannon entropy. Given the free-energy, the parameter updates are canonically derived by setting the partial derivatives of $\mathcal{F}(q,\Theta)$ w.r.t. the parameters to zero. For the spike-and-slab sparse coding model (1)
and (2), we obtain (similar to [8, 6, 7]) the following closed-form M-step equations:
$$\pi = \frac{1}{NH}\sum_n \big\langle |\vec{b}\,|\big\rangle_n\,, \qquad \psi_h^2 = \frac{\sum_n \big\langle (s_h - \mu_h b_h)^2\big\rangle_n}{\sum_n \langle b_h\rangle_n}\,, \tag{3}$$

$$W = \Big(\sum_n \vec{y}^{(n)}\langle\vec{s}\,\rangle_n^{T}\Big)\Big(\sum_n \big\langle\vec{s}\,\vec{s}^{\,T}\big\rangle_n\Big)^{-1}, \quad \vec{\mu} = \frac{\sum_n \langle\vec{s}\,\rangle_n}{\sum_n \langle\vec{b}\,\rangle_n}, \quad \sigma_d^2 = \frac{1}{N}\sum_n \Big\langle\big(\sum_h W_{dh} s_h - y_d^{(n)}\big)^2\Big\rangle_n \tag{4}$$

with $s_h = b_h z_h$ and $|\vec{x}| = \sum_h x_h$ as defined above.
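To make the updates concrete, the following sketch (our own code, not the authors' implementation) applies Eqs. (3) and (4) given estimates of the required posterior expectations; array names and shapes are assumptions:

import numpy as np

def m_step(Y, E_s, E_ssT, E_b, E_res_sq):
    """Closed-form M-step of Eqs. (3)-(4), given per-data-point posterior
    expectations (e.g., from the sampling approximation of Section 3).
    Y        : (N, D) data points
    E_s      : (N, H) estimates of <s>_n
    E_ssT    : (N, H, H) estimates of <s s^T>_n
    E_b      : (N, H) estimates of <b>_n
    E_res_sq : (N, H) estimates of <(s_h - mu_h b_h)^2>_n
    """
    N, H = E_s.shape
    pi = E_b.sum() / (N * H)                             # Eq. (3)
    psi_sq = E_res_sq.sum(axis=0) / E_b.sum(axis=0)      # Eq. (3)
    W = (Y.T @ E_s) @ np.linalg.inv(E_ssT.sum(axis=0))   # Eq. (4)
    mu = E_s.sum(axis=0) / E_b.sum(axis=0)               # Eq. (4)
    # Plug-in estimate of sigma_d^2; the exact update in Eq. (4)
    # would use the full second moments <s s^T>_n instead of <s>_n.
    sigma_sq = ((Y - E_s @ W.T) ** 2).mean(axis=0)
    return pi, psi_sq, W, mu, sigma_sq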
3 Approximate Inference With Select-and-Sample
The optimal choices for the distributions $q^{(n)}(\vec{s})$ for the expectations in (3) and (4) are the posteriors $p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)$, but neither the posteriors nor their corresponding expectation values are computationally tractable for high dimensions. However, a crucial observation that we exploit in our work is
that for observed data such as natural sensory input or data generated by a sparse coding model, the
activity of latent components (or causes) can be expected to be concentrated in low-dimensional
subspaces. In other words, for a given observed data point, all except for a very small fraction
of the latent components can be assumed to be non-causal or irrelevant, hence the corresponding
latent space can be neglected for the integration over $\vec{s}$. For a sparse setting (i.e., $\pi \ll 1$) of the spike-and-slab model (1) to (2), we consider such low dimensional subspaces to be spanned by a few (approximately $\pi H$) of the $H$ latent space coordinates. If we denote by $\mathcal{J}^{(n)}$ the subspace containing the large majority of posterior mass for a given data point $\vec{y}^{(n)}$, an approximation to $p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)$ is
then given by the following truncated distribution:
$$q^{(n)}(\vec{s};\Theta) = \frac{p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)}{\int_{\vec{s}\,'\in\mathcal{J}^{(n)}} p(\vec{s}\,'\,|\,\vec{y}^{(n)},\Theta)\,d\vec{s}\,'}\;\delta(\vec{s}\in\mathcal{J}^{(n)}), \tag{5}$$

where $\delta(\vec{s}\in\mathcal{J}^{(n)})$ is an indicator function, taking the value $\delta(\vec{s}\in\mathcal{J}^{(n)}) = 1$ only if $\vec{s}\in\mathcal{J}^{(n)}$ and zero otherwise. Truncated approximations have previously been shown to work efficiently and
accurately for challenging data models [14, 15, 16]. Latents were restricted to be binary, however, and scalability was previously limited by the combinatorics within the selected latent subsets. For our aim of very large scale applications, we therefore apply the select-and-sample approach [13] and use a sampling approximation that operates within the subspaces $\mathcal{J}^{(n)}$. Unlike [13], who used binary latents, we here apply the approximation to the continuous latent space of spike-and-slab sparse coding. Formally, this means that we first use the posterior approximation $q^{(n)}(\vec{s})$ in Eqn. 5 and then approximate the expectation values w.r.t. $q^{(n)}(\vec{s})$ using sampling (see illustration of Alg. 1):
$$\langle f(\vec{s})\rangle_n = \int q^{(n)}(\vec{s})\, f(\vec{s})\, d\vec{s} \;\approx\; \frac{1}{M}\sum_{m=1}^{M} f\big(\vec{s}^{\,(m)}\big), \quad\text{where}\quad \vec{s}^{\,(m)} \sim q^{(n)}(\vec{s})\,, \tag{6}$$

where $M$ is the number of samples and $f(\vec{s})$ can be any argument of the expectation values in (3) and (4).
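In code, the estimator (6) is a plain sample average over the drawn states; the minimal sketch below (our own notation) already follows the burn-in convention of Alg. 1 further down, where the first half of the samples is discarded:

import numpy as np

def mc_expectation(samples, f):
    """Estimate <f(s)>_n as in Eq. (6) from Gibbs samples of shape (M, H),
    discarding the first half of the chain as burn-in (cf. Alg. 1)."""
    kept = samples[samples.shape[0] // 2:]
    return np.mean([f(s) for s in kept], axis=0)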
It remains to be shown how difficult sampling from $q^{(n)}(\vec{s})$ is compared to directly sampling from the full posterior $p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)$. The indicator function $\delta(\vec{s}\in\mathcal{J}^{(n)})$ means that we can clamp all values of $\vec{s}$ outside the selected subspace to zero, but we have to answer the question how the remaining $s_h$ are sampled. A closer analysis of the problem shows that the distribution to sample in the reduced space is given by the posterior w.r.t. a truncated generative model. To show this, let us first introduce some notation: Let us denote by $I$ a subset of the indices of the latent variables $\vec{s}$, i.e., $I \subseteq \{1,\ldots,H\}$, and let us use $H\setminus I$ as an abbreviation for $\{1,\ldots,H\}\setminus I$. The vector $\vec{s}_I$ w.r.t. $I$ is then, as customary, a vector in $\mathbb{R}^{|I|}$ defined by those entries $s_h$ with $h \in I$. In analogy, we take a matrix $W_I \in \mathbb{R}^{D\times|I|}$ to be defined by row vectors $(\vec{w}_d^{\,T})_I$, where $\vec{w}_d^{\,T}$ are the row vectors of $W \in \mathbb{R}^{D\times H}$.
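In an implementation, the truncated model used in Proposition 1 below amounts to simple slicing of the parameters; a minimal sketch with assumed variable names:

import numpy as np

def truncate_params(W, mu, psi, I):
    """Restrict the generative parameters to the index set I; pi and the
    observation noise sigma are shared between full and truncated model."""
    I = np.asarray(I)
    return W[:, I], mu[I], psi[I]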
Proposition 1. Consider the spike-and-slab generative model (1) to (2) with parameters $\Theta$, and let $\Theta_{I^{(n)}} = (\pi, \vec{\mu}_{I^{(n)}}, \vec{\psi}_{I^{(n)}}, W_{I^{(n)}}, \vec{\sigma})$ be the parameters of a truncated spike-and-slab model with an $H' = \dim(I^{(n)})$ dimensional latent space. Then it applies that sampling from the truncated distribution in (5) is equivalent to sampling from the posterior $p(\vec{s}_{I^{(n)}}\,|\,\vec{y}^{(n)},\Theta_{I^{(n)}})$ of the truncated spike-and-slab model, while all values $s_h$ with $h \notin I^{(n)}$ are clamped to zero.
Proof. If $I^{(n)}$ denotes the indices of those latents $s_h$ that span the subspace in which the posterior mass of $p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)$ is concentrated, then these subsets are given by $\mathcal{J}^{(n)} = \{\vec{s}\in\mathbb{R}^H \,|\, \vec{s}_{H\setminus I^{(n)}} = \vec{0}\}$, i.e., $\delta(\vec{s}\in\mathcal{J}^{(n)})$ can be rewritten as $\prod_{h\notin I^{(n)}}\delta(s_h=0)$. Considering (5), we can therefore set the corresponding values $\vec{s}_{H\setminus I^{(n)}} = \vec{0}$. We now drop the superscript $n$ for readability and first derive:

$$p(\vec{s}_I,\vec{s}_{H\setminus I}=\vec{0},\vec{y}\,|\,\Theta) = \mathcal{N}\big(\vec{y};\,W_I\vec{s}_I + W_{H\setminus I}\vec{0},\,\vec{\sigma}\big)\,\prod_{h\in I}\mathrm{Bern}(b_h;\pi)\,\mathcal{N}(z_h;\mu_h,\psi_h^2)\,\prod_{h\notin I}\mathrm{Bern}(b_h=0;\pi)\,\mathcal{N}(z_h;\mu_h,\psi_h^2)$$
$$= p(\vec{s}_I,\vec{y}\,|\,\Theta_I)\,U(\vec{s}_{H\setminus I}=\vec{0},\Theta) \quad\text{with}\quad U(\vec{s}_{H\setminus I},\Theta) = p(\vec{s}_{H\setminus I}\,|\,\Theta_{H\setminus I}),$$

i.e., the joint with $\vec{s}_{H\setminus I}=\vec{0}$ is given by the joint of the truncated model multiplied by a term not depending on $\vec{s}_I$, such that:

$$q(\vec{s};\Theta) = \frac{p(\vec{s}_I,\vec{s}_{H\setminus I}=\vec{0},\vec{y}\,|\,\Theta)\,\delta(\vec{s}\in\mathcal{J})}{\int_{\vec{s}\,'\in\mathcal{J}} p(\vec{s}_I^{\,\prime},\vec{s}_{H\setminus I}^{\,\prime}=\vec{0},\vec{y}\,|\,\Theta)\,d\vec{s}\,'} = \frac{p(\vec{s}_I,\vec{y}\,|\,\Theta_I)\,U(\vec{s}_{H\setminus I}=\vec{0},\Theta)\,\delta(\vec{s}\in\mathcal{J})}{\int_{\vec{s}\,'\in\mathcal{J}} p(\vec{s}_I^{\,\prime},\vec{y}\,|\,\Theta_I)\,d\vec{s}\,'\;U(\vec{s}_{H\setminus I}=\vec{0},\Theta)}$$
$$= \frac{p(\vec{s}_I,\vec{y}\,|\,\Theta_I)}{\int p(\vec{s}_I^{\,\prime},\vec{y}\,|\,\Theta_I)\,d\vec{s}_I^{\,\prime}}\,\prod_{h\notin I}\delta(s_h=0) = p(\vec{s}_I\,|\,\vec{y},\Theta_I)\,\prod_{h\notin I}\delta(s_h=0)\,. \tag{7}$$

$\square$
Following the proof, Proposition 1 applies for any generative model $p(\vec{s},\vec{y}\,|\,\Theta)$ for which $p(\vec{s}_I,\vec{s}_{H\setminus I}=\vec{0},\vec{y}\,|\,\Theta) = p(\vec{s}_I,\vec{y}\,|\,\Theta_I)\,U(\vec{s}_{H\setminus I}=\vec{0},\vec{y},\Theta)$ holds. This includes a large class of models such as linear and non-linear spike-and-slab models, and potentially hierarchical models such as SBNs. Proposition 1 does not apply in general, however (we exploit specific model properties).
Sampling. In previous work [7], posteriors for spike-and-slab sparse coding have been evaluated exhaustively within selected $I^{(n)}$, which resulted in the scalability being strongly limited by the dimensionality of $I^{(n)}$. Based on Proposition 1, we can now overcome this bottleneck by using sampling approximations within the subspaces $\mathcal{J}^{(n)}$, and we have shown that such sampling is equivalent to sampling w.r.t. a much lower dimensional spike-and-slab model. The dimensionality of $\mathcal{J}^{(n)}$ is still non-trivial, however, and we use a Markov chain Monte Carlo (MCMC) approach, namely Gibbs sampling, for efficient scalability. Following Proposition 1 we derive a sampler for the spike-and-slab model (1) to (2) and later apply it for the needed (low) dimensionality.
While the result of sampling from posteriors of truncated models applies for a broad class of spike-and-slab models (Proposition 1), we can here exploit a further specific property of the model (1) to (2). As has previously been observed and exploited in different contexts [8, 12, 17], the Gaussian slab and the Gaussian noise model can be combined using Gaussian identities such that integrals over the continuous latents $\vec{z}$ are solvable analytically. Here we can use this observation for the derivation of a Gibbs sampler. For this we first devise a latent variable Markov chain such that its target density
is given by the following conditional posterior distribution:
$$p(s_h\,|\,\vec{s}_{H\setminus h},\vec{y},\Theta) \;\propto\; p(s_h|\Theta)\,\prod_d p(y_d\,|\,s_h,\vec{s}_{H\setminus h},\Theta) = \Big[(1-\pi)\,\delta(s_h) + \pi\,\mathcal{N}(s_h;\mu_h,\psi_h^2)\Big]\,\prod_d \mathcal{N}(s_h;\beta_d,\tau_d^2)\,, \tag{8}$$

where $\delta(\cdot)$ is the Dirac delta to represent the spike at zero and where $\beta_d = \big(y_d - \sum_{h'\neq h} W_{dh'} s_{h'}\big)/W_{dh}$ and $\tau_d^2 = \sigma_d^2/W_{dh}^2$. Using Gaussian identities we obtain:

$$p(s_h\,|\,\vec{s}_{H\setminus h},\vec{y},\Theta) \;\propto\; (1-\pi)\,\mathcal{N}(s_h;\rho,\tau^2)\,\delta(s_h) + \pi\,\mathcal{N}(s_h;\kappa,\sigma^2)\,, \tag{9}$$

where $\rho = \tau^2\sum_d \beta_d/\tau_d^2$ and $\tau^2 = \big(\sum_d 1/\tau_d^2\big)^{-1}$, whereas $\kappa = \sigma^2\big(\rho/\tau^2 + \mu_h/\psi_h^2\big)$ and $\sigma^2 = \big(1/\tau^2 + 1/\psi_h^2\big)^{-1}$. We can observe that the conditional posterior (9) of $s_h$ retains the form of a spike-and-slab distribution. We can therefore simply compute the cumulative distribution function (CDF) of (9) to simulate $s_h$ from the exact conditional distribution ($s_h \sim p(s_h\,|\,\vec{s}_{H\setminus h},\vec{y},\Theta)$) by means of inverse transform sampling.
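A direct implementation of this update is straightforward. The following sketch (our own code, not the authors'; the spike/slab mixture weight is obtained from Eq. (9) via the standard Gaussian product identity, and all variable names are assumptions) draws one Gibbs update for a single coordinate s_h:

import numpy as np

def gibbs_update_sh(y, s, W, h, pi, mu_h, psi_h_sq, sigma_sq, rng):
    """One Gibbs draw of s_h from the conditional (9).
    y: (D,) data point; s: (H,) current state (updated in place);
    W: (D, H); sigma_sq: (D,) observation variances."""
    r = y - W @ s + W[:, h] * s[h]          # residual with s_h removed
    tau_sq = 1.0 / np.sum(W[:, h] ** 2 / sigma_sq)
    rho = tau_sq * np.sum(r * W[:, h] / sigma_sq)
    # Combine likelihood N(s_h; rho, tau^2) with the slab N(s_h; mu_h, psi_h^2).
    sig_sq = 1.0 / (1.0 / tau_sq + 1.0 / psi_h_sq)
    kappa = sig_sq * (rho / tau_sq + mu_h / psi_h_sq)
    def norm_pdf(x, m, v):
        return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
    # Unnormalized spike vs. slab weights (Gaussian identities).
    w_spike = (1 - pi) * norm_pdf(0.0, rho, tau_sq)
    w_slab = pi * norm_pdf(rho, mu_h, tau_sq + psi_h_sq)
    p_slab = w_slab / (w_spike + w_slab)
    s[h] = rng.normal(kappa, np.sqrt(sig_sq)) if rng.random() < p_slab else 0.0
    return s[h]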
Selecting. The Gibbs sampler can now be applied to generate posterior samples for a truncated spike-and-slab model (defined using parameters $\Theta_{I^{(n)}}$). We also obtain a valid approximation, of course, without selection ($I = \{1,\ldots,H\}$) but MCMC samplers in very high dimensional spaces
Algorithm 1: Select-and-sample for spike-and-slab sparse coding (S5C)
  init $\Theta$;
  repeat
    for $n = 1,\ldots,N$ do
      for $h = 1,\ldots,H$ do
        compute $S_h(\vec{y}^{(n)})$ as in (10);
      define $I^{(n)}$ as in (11);
      for $m = 1,\ldots,M$ do
        draw $\vec{s}_{I^{(n)}}^{\,(m)} \sim p(\vec{s}_{I^{(n)}}\,|\,\vec{y}^{(n)},\Theta_{I^{(n)}})$ using (9);
      compute $\langle f(\vec{s})\rangle_n = \frac{2}{M}\sum_{m=M/2+1}^{M} f(\vec{s}^{\,(m)})$;
    compute M-step with arguments $f(\vec{s})$ as in (3) and (4);
  until $\Theta$ has converged;
[Accompanying figure: illustration of the general application.]
with complex posterior structure are known to be challenging (convergence to target distributions can be very slow). The problems typically increase superlinearly with hidden dimensionality, but for intermediate dimensions a Gibbs sampler can be very fast and accurate. Using subspaces $\mathcal{J}^{(n)}$ with intermediate dimensionality therefore results in very efficient and accurate sampling approximations within these spaces. An overall very accurate approximation is then obtained if the
subspaces are well selected and if they do contain the large majority of posterior mass. By using
exact EM it was indeed previously shown for spike-and-slab sparse coding [7] that almost all posterior
mass, e.g., for naturally mixed sound sources, is concentrated in collections of low-dimensional
subspaces (also compare [18]). To define a subspace $\mathcal{J}^{(n)}$ given a data point $\vec{y}^{(n)}$, we follow earlier approaches [15, 14] and first define an efficiently computable selection function to choose those latents that are the most likely to have generated the data point. We use the selection function in [7], which is given by:
$$S_h(\vec{y}^{(n)},\Theta) = \prod_d \mathcal{N}\big(y_d^{(n)};\ W_{dh}\mu_h,\ \sigma_d^2 + W_{dh}^2\,\psi_h^2\big) \;\propto\; p(\vec{y}^{(n)}\,|\,\vec{b}=\vec{b}_h,\Theta), \tag{10}$$

where $\vec{b}_h$ represents a singleton state with only component $h$ being equal to one. The subsets are then defined as follows:

$I^{(n)}$ is the set of $H'$ indices such that $\forall h \in I^{(n)}\ \forall h' \notin I^{(n)}: S_h(\vec{y}^{(n)},\Theta) > S_{h'}(\vec{y}^{(n)},\Theta)$. (11)

We then use $\mathcal{J}^{(n)} = \{\vec{s}\,|\,\vec{s}_{H\setminus I^{(n)}} = \vec{0}\}$ as above. In contrast to previous approaches with $H'$ typically $< 10$, $H'$ can be chosen relatively large here because the Gibbs sampler is still very efficient and precise for $H' > 10$ (we will go up to $H' = 40$).
By combining the selection procedure and the Gibbs sampler using Proposition 1, we obtain the efficient approximate EM algorithm summarized in Alg. 1. It will be referred to as S5C (see Alg. 1 caption). Note that we will, for all experiments, always discard the first half of the drawn samples as burn-in.
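The selection step (10)-(11) reduces to scoring all H units and keeping the top H'; a minimal sketch (our own code, assuming the marginal variance form reconstructed in Eq. (10)) computed in log-space for numerical stability:

import numpy as np

def select_subspace(y, W, mu, psi_sq, sigma_sq, H_prime):
    """Return the H' latent indices maximizing the selection function (10)."""
    var = sigma_sq[:, None] + (W ** 2) * psi_sq[None, :]       # (D, H)
    log_S = -0.5 * np.sum(
        np.log(2 * np.pi * var) + (y[:, None] - W * mu[None, :]) ** 2 / var,
        axis=0)                                                # (H,)
    return np.argsort(log_S)[-H_prime:]                        # Eq. (11)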
4 Numerical Experiments
In all the experiments, the initial values of $\pi$ were drawn from a uniform distribution on the interval $[0.1, 0.5]$ (i.e., intermediately sparse), $\vec{\mu}$ was initialized with normally distributed random values, $\psi_h$ was set to 1 and $\sigma_d$ was initialized with the standard deviation of $y_d$. The elements of $W$ were
iid drawn from a normal distribution with zero mean and a standard deviation of 5.0. We used a
multi-core parallelized implementation and executed the algorithm on up to 1000 CPU cores.
Verification of functional accuracy. We first investigate the accuracy and convergence properties of our method on ground-truth data which was generated by the spike-and-slab data model (1) and (2) itself. We used $H = 10$ hidden variables, $D = 5\times 5$ observed dimensions, and generative fields $\vec{W}_h$ in the form of five horizontal and five vertical bars. As is customary for such bars-like data (e.g., [15] and citations therein) we take each field to contribute to a data point with probability $\pi = \frac{2}{H}$. We then randomly
make each of the 5 vertical and 5 horizontal bars positive or negative by assigning them a value of 5
Figure 1: Functional accuracy of S5C. A Artificial ground-truth data. B Likelihoods during learning (Alg. 1) for different $H'$. C Denoising performance of S5C on the "house" benchmark as used for other methods (MTMKL [8], K-SVD [4], Beta process [11] and GSC-ET [7]). Bold values highlight the best performing algorithm. *Value not bold-faced as noise variance is assumed known a-priori [4]. D Top: Noisy image with $\sigma = 25$. Bottom: State-of-the-art denoising result after S5C was applied.
or $-5$, while the non-bar pixels are assigned zero value. The parameters of the latent slabs $\mu_h$ and $\psi_h$ are set to 0.0 and 1.0, respectively, and we set the observation noise to $\sigma_d = 1.0$. We generate
N = 5000 data points with this setting (see Fig. 1A for examples).
We apply the S5C algorithm (Alg. 1) with H = 10 latents and M = 40 samples per data point and
use two settings for preselection: (A) no preselection ($H' = H = 10$) and (B) subspace preselection using $H' = 5$. We did ten runs per setting using different initializations per run as above. For setting
(A), i.e. pure Gibbs sampling, the algorithm recovered, after 150 EM iterations, the generating bars
in 2 of the 10 runs. For setting (B) convergence was faster and in 9 of the 10 runs all bars were
recovered after 50 EM iterations. Fig. 1B shows for all 20 runs likelihoods during learning (which
are still tractable for H = 10). These empirical results show the same effect for a continuous latent
variable model as was previously reported for non-continuous latents [19, 20]: preselection helps to avoid local optima (presumably because poor non-sparse solutions are destabilized using subspace
selection).
After having verified the functioning of S5C on artificial data, we turned to verifying the approach on a denoising benchmark, which is standard for sparse coding. We applied S5C to a noisy "house" image [following 11, 4, 8, 7]. We used three different levels of added Gaussian noise ($\sigma = 15, 25, 50$). For each setting we extract $8\times 8$ patches from the $256\times 256$ noisy image, visiting a whole grid of $250\times 250$ pixels by shifting (vertically and horizontally) 1 pixel at a time. In total we obtained $N = 62{,}001$ overlapping image patches as data points. We applied the S5C algorithm with $H = 256$, selected subspaces with $H' = 40$ and used $M = 100$ samples per subspace. Fig. 1C,D show the obtained results and a comparison to alternative approaches. As can be observed, S5C is competitive with the other approaches and results in higher peak signal-to-noise ratios (PSNRs) (see [7] for details) than, e.g., K-SVD or factored variational EM approaches (MTMKL) for $\sigma = 25$ and $50$. Even though S5C uses the additional sampling approximation in the selected subspaces, it is also competitive with ET-GSC [7], which is less efficient as it sums exhaustively within subspaces. For $\sigma = 25$ S5C even outperforms ET-GSC, presumably because S5C allows for selecting larger subspaces. In general we
observed increased improvement with the number of samples, but improvements with H saturated
after about H = 256.
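The patch pipeline used here is standard; as a sketch (our own code, with the common PSNR convention for a given peak value), the sliding-window extraction below yields exactly 62,001 patches of size 8 x 8 for a 256 x 256 image, consistent with N above:

import numpy as np

def extract_patches(img, size=8):
    """All overlapping size x size patches of img (stride 1), as rows."""
    H, W = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(H - size + 1)
                     for j in range(W - size + 1)])

def psnr(clean, denoised, peak=255.0):
    mse = np.mean((clean - denoised) ** 2)
    return 10 * np.log10(peak ** 2 / mse)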
Large-scale application and V1 encoding. Since sparse coding was first suggested as a coding model for primary visual cortex [21], a main goal has been its application to very high latent dimensions because V1 is believed to be highly overcomplete [1]. Furthermore, for very large hidden dimensions, non-standard generative fields were observed [1], a finding which is of significant relevance for the ongoing debate of how and where increasingly complex structures in the visual system may be processed. Here we applied S5C with $H = 10\,000$ hidden dimensions to demonstrate scalability of the method, and to study highly-overcomplete V1 encoding based on a posterior approximation capturing rich structure. For our application we used the standard van Hateren database [22], extracted $N = 10^6$ image patches of size $16\times 16$, and applied pseudo-whitening following [21].
Figure 2: Selection of different types of generative fields as learned by S5C using $H = 10{,}000$ latent dimensions (see Suppl. for all fields). Gabor-like fields are the most frequent type (Gabors, ridgelets, gratings), followed by globular fields, curved fields and corner-like fields. We also observed textures other than gratings. Gabors, curved and corner fields were almost all among the 30% most frequently activated fields. Ridgelets, globular fields and gratings were typically among the 30-80% most used fields.
We applied S5C for 50 EM iterations to the data using $H' = 20$ dimensional subspaces and $M = 50$ samples per data point. After learning we observed a large number of generative fields specialized to image components. Like in recent large-scale applications of standard sparse coding [1] we found fields that did not specialize (about 1% in [1] and about 12% for S5C). The higher percentage for S5C may be due to the five-fold higher dimensionality used here. For the fields specialized to components, we observed a large number of Gabor-like fields including ridgelets and gratings (names follow [1]). Furthermore, we observed globular fields that have been observed experimentally [23] and are the subject of a number of recent theoretical studies (e.g., [14, 3]). Notably, we also observed a number of curved fields and fields sensitive to corner-like structures (Fig. 2 shows some examples). Curved fields have so far only been described to emerge from sparse coding once before [1] and for convolutional sparse coding in two cases [24, 25] (to the knowledge of the authors) but have been suggested for technical applications much earlier [26] (a link that was not made, so far). Corner-like structures have previously not been observed for sparse coding, presumably because of lower dimensional latent spaces (also not in [1], but compare convolutional extensions [24, 16, 25]). The numbers of curved (a few hundred) and corner-like fields (a few tens) are small, but we almost exclusively find those fields among the 20% most frequently used fields (we order according to average approx. posterior, see supplement). Neural responses to corner-like sensitivities are typically associated with higher-level processing in the visual pathway. Our results may be evidence for such structures to emerge together, e.g., with Gabors for very large latent dimensionality (as expected for V1). In general, the statistics of generative field shapes can be influenced by many factors including preprocessing details, sparsity, local optima or details of the learning algorithms. However, because of the applied approximation, S5C can avoid the choice of a sparsity penalty that is required for MAP-based approaches [1]. Instead we statistically infer the sparsity level, which is well interpretable for hard sparsity, and which corresponds for our application to $\pi H = 6.2$ components per patch (also compare [14, 20]). In the supplement we provide the full set of the $H = 10\,000$ learned generative fields.
Figure 3: The y-axis shows the highest reported latent dimensionality for different sparse coding algorithms (cont. latents), and the x-axis the accuracy of posterior approximations. Within each column, entries are ordered (left-to-right) w.r.t. the publication year. 1st column: Sparse coding systems using one latent state for inference (e.g., MAP-like [27, 28, 1], SAILnet [3] or K-SVD [4, 5]). 2nd: Approximate posteriors in the form of factored variational distributions that can capture multiple modes but assume posterior independence among the latents $s_h$ (MTMKL [8], S3C [6]). 3rd: Sampling-based approximations [11, 12] and truncated approximations (ssMCA [20], GSC-ET [7]) that capture multiple posterior modes and complex latent dependencies. Following [6] we also included ssRBM for comparison. 4th: Full posterior with exact EM [17].
5 Discussion
In this study we have applied a select-and-sample approach [13] to derive and study an approximate EM algorithm applicable to models with very large-scale latent spaces. Select-and-sample combines sampling with subspace preselection [15, 14] and has previously been applied as a model for neural inference using binary latents [13]. Furthermore, it has been used to overcome analytical intractabilities of a non-linear sparse coding model [20]. Here, we for the first time apply select-and-sample to scale a standard linear sparse coding model with a spike-and-slab prior up to very large hidden dimensions. Spike-and-slab sparse coding is hereby not only more expressive than standard Laplace or binary priors [8, 12, 7, 20] but results in properties that we can exploit for our approximation. We have thus analytically shown (Proposition 1) that select-and-sample is applicable to a large class of models with hard sparsity (giving justification also to earlier applications [20]).
Empirically, we have, firstly, shown that select-and-sample for spike-and-slab sparse coding (S5C) maintains the functional competitiveness of alternative approaches (Fig. 1). Secondly, we demonstrated efficiency by scaling S5C up to very high-dimensional latent spaces (we go up to 10 000). For
comparison, Fig. 3 shows the largest reported latent spaces of different sparse coding approaches
depending on the posterior structure that can be captured. Non-probabilistic approaches (e.g., K-SVD
[4, 5]) are known to scale relatively well, and, likewise, approaches using MAP approximations
[2, 3, 1] have been shown to be applicable to large scales. None of these approaches captures
posterior dependencies or multiple posterior modes given a data point, however. Factored variational
approaches can be scaled to very high-dimensional latent spaces and can capture multiple posterior
modes. No latent dependencies in the posterior are modeled, however, which has previously been
reported to result in disadvantageous behavior (e.g. [29, 7]). In contrast to MAP-based or factored
approaches, sampling approaches can model both multiple posterior modes and complex latent
dependencies. Some models hereby additionally include a more Bayesian treatment of parameters
[11, 12] (also compare [8]) which can be considered more general than approaches followed in
other work (see Fig. 3). The scalability of sampling-based approaches has been limited, however. Among those models capturing the crucial posterior structure, S5C shows, to the knowledge of the authors, the largest-scale applicability. This is even the case if approaches using factored posteriors are included. Notably, there is also little reported for very large hidden dimensions for MAP-based
or deterministic approaches (compare, e.g., [5]), although scalability should be less of an issue. In
general it may well be that a method is scalable to larger than the reported latent spaces but that such
increases do not result in functional bene?ts.
For probabilistic approaches, the requirement for approximations with high accuracy has also been identified in other very promising work [30, 31], which uses different approaches that were, so far, applied to much smaller scales. For the select-and-sample method and the spike-and-slab
sparse coding model, the high-dimensional applicability means that this or similar approaches are
a promising candidate for models such as DBNs, SBNs or CNNs because of their close relation to
spike-and-slab models and their typically similarly large scale settings. Here we have studied an
application of S5C to standard image patches, primarily to demonstrate scalability. The obtained
non-standard generative fields may by themselves, however, be of relevance for V1 encoding (Fig. 2)
and they show that spike-and-slab models may be very suitable generalized V1 models. From a
probabilistic view on neural processing, the accuracy that can be provided by select-and-sample
inference is hereby very desirable and is consistent, e.g., with sampling-based interpretations of neural
variability [32]. Here we have shown that such probabilistic approximations are also functionally
competitive and scalable to very large hidden dimensions.
Acknowledgements. We thank E. Guiraud for help with Alg. 1 (illustration) and acknowledge
funding by the DFG: Cluster of Excellence EXC 1077/1 (Hearing4all) and grant LU 1196/5-1.
References
[1] B. Olshausen. Highly overcomplete sparse coding. In Proc. SPIE, 8651, 2013.
[2] B. Olshausen and D. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[3] J. Zylberberg, J. Murphy, and M. DeWeese. A Sparse Coding Model with Synaptically Local Plasticity
and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields. PLoS
Comp. Bio., 7(10):e1002250, 2011.
[4] H. Li and F. Liu. Image denoising via sparse and redundant representations over learned dictionaries in wavelet domain. In ICIG, pages 754–758, 2009.
[5] A. Mensch, J. Mairal, B. Thirion, and G. Varoquaux. Dictionary learning for massive matrix factorization.
ICML, 2016.
[6] I. J. Goodfellow, A. Courville, and Y. Bengio. Scaling up spike-and-slab models for unsupervised feature learning. TPAMI, 35(8):1902–1914, 2013.
[7] A. S. Sheikh, J. A. Shelton, and J. Lücke. A truncated EM approach for spike-and-slab sparse coding. JMLR, 15:2653–2687, 2014.
[8] M. Titsias and M. Lazaro-Gredilla. Spike and slab variational inference for multi-task and multiple kernel learning. In NIPS, pages 2339–2347, 2011.
[9] G. E. Hinton, B. Sallans, and Z. Ghahramani. A hierarchical community of experts. In Learning in graphical models, pages 479–494. Springer, 1998.
[10] A. B. Patel, T. Nguyen, and R. G. Baraniuk. A probabilistic theory of deep learning. In Advances in Neural Information Processing Systems (NIPS), 2016. In press, preprint arXiv:1504.00641.
[11] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. In NIPS, pages 2295–2303, 2009.
[12] S. Mohamed, K. Heller, and Z. Ghahramani. Evaluating Bayesian and L1 approaches for sparse unsupervised learning. In ICML, 2012.
[13] J. Shelton, J. Bornschein, A.-S. Sheikh, P. Berkes, and J. Lücke. Select and sample - a model of efficient neural inference and learning. In NIPS, pages 2618–2626, 2011.
[14] G. Puertas, J. Bornschein, and J. Lücke. The maximal causes of natural scenes are edge filters. In NIPS, volume 23, pages 1939–47, 2010.
[15] J. Lücke and J. Eggert. Expectation truncation and the benefits of preselection in training generative models. JMLR, 11:2855–2900, 2010.
[16] Z. Dai, G. Exarchakis, and J. Lücke. What are the invariant occlusive components of image patches? A probabilistic generative approach. In NIPS 26, pages 243–251. 2013.
[17] J. Lücke and A.-S. Sheikh. Closed-form EM for sparse coding and its application to source separation. In LVA, pages 213–221, 2012.
[18] K. Schnass. Local identification of overcomplete dictionaries. JMLR, 16:1211–1242, 2015.
[19] G. Exarchakis, M. Henniges, J. Eggert, and J. Lücke. Ternary sparse coding. In LVA/ICA, pages 204–212, 2012.
[20] J. A. Shelton, A. S. Sheikh, J. Bornschein, P. Sterne, and J. Lücke. Nonlinear spike-and-slab sparse coding for interpretable image encoding. PLoS ONE, 10:e0124088, 05 2015.
[21] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[22] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265:359–366, 1998.
[23] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455–463, 2002.
[24] P. Jost, P. Vandergheynst, S. Lesage, and R. Gribonval. Motif: an efficient algorithm for learning translation invariant dictionaries. In IEEE Int. Conf. Acoustics Speech and Sig. Processing, volume 5, 2006.
[25] J. Mairal, F. Bach, and J. Ponce. Sparse modeling for image and vision processing. Foundations and Trends in Computer Graphics and Vision, 8(2-3):85–283, 2014.
[26] N. Krüger and G. Peters. Object recognition with banana wavelets. In Eur. Symp. ANNs, 1997.
[27] P. Garrigues and B. A. Olshausen. Learning horizontal connections in a sparse coding model of natural
images. In NIPS, 2007.
[28] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proc. ICML, pages 921–928, 2011.
[29] A. Ilin and H. Valpola. On the effect of the form of the posterior approximation in variational learning of ICA models. Neural Processing Letters, 22(2):183–204, 2005.
[30] T. Salimans, D. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. ICML, 2015.
[31] D. Rezende and S. Mohamed. Variational inference with normalizing flows. ICML, 2015.
[32] P. Berkes, G. Orban, M. Lengyel, and J. Fiser. Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Science, 331(6013):83–87, January 2011.
5,832 | 6,277 | Refined Lower Bounds for Adversarial Bandits
Sébastien Gerchinovitz
Institut de Mathématiques de Toulouse
Université Toulouse 3 Paul Sabatier
Toulouse, 31062, France
[email protected]
Tor Lattimore
Department of Computing Science
University of Alberta
Edmonton, Canada
[email protected]
Abstract
We provide new lower bounds on the regret that must be suffered by adversarial
bandit algorithms. The new results show that recent upper bounds that either (a) hold with high probability, or (b) depend on the total loss of the best arm, or (c) depend on the quadratic variation of the losses, are close to tight. Besides this, we prove two impossibility results. First, the existence of a single arm that is optimal
in every round cannot improve the regret in the worst case. Second, the regret
cannot scale with the effective range of the losses. In contrast, both results are
possible in the full-information setting.
1 Introduction
We consider the standard K-armed adversarial bandit problem, which is a game played over $T$ rounds between a learner and an adversary. In every round $t \in \{1,\ldots,T\}$ the learner chooses a probability distribution $p_t = (p_{i,t})_{1\le i\le K}$ over $\{1,\ldots,K\}$. The adversary then chooses a loss vector $\ell_t = (\ell_{i,t})_{1\le i\le K} \in [0,1]^K$, which may depend on $p_t$. Finally the learner samples an action from $p_t$ denoted by $I_t \in \{1,\ldots,K\}$ and observes her own loss $\ell_{I_t,t}$. The learner would like to minimise her regret, which is the difference between the cumulative loss suffered and the loss suffered by the optimal action in hindsight:

$$R_T(\ell_{1:T}) = \sum_{t=1}^{T} \ell_{I_t,t} - \min_{1\le i\le K}\sum_{t=1}^{T} \ell_{i,t},$$

where $\ell_{1:T} \in [0,1]^{TK}$ is the sequence of losses chosen by the adversary. A famous strategy is called Exp3, which satisfies $\mathbb{E}[R_T(\ell_{1:T})] = O(\sqrt{KT\log(K)})$, where the expectation is taken over the randomness in the algorithm and the choices of the adversary [Auer et al., 2002]. There is also a lower bound showing that for every learner there is an adversary for which the expected regret is $\mathbb{E}[R_T(\ell_{1:T})] = \Omega(\sqrt{KT})$ [Auer et al., 1995]. If the losses are chosen ahead of time, then the adversary is called oblivious, and in this case there exists a learner for which $\mathbb{E}[R_T(\ell_{1:T})] = O(\sqrt{KT})$ [Audibert and Bubeck, 2009]. One might think that this is the end of the story, but it is not so. While
the worst-case expected regret is one quantity of interest, there are many situations where a refined
regret guarantee is more informative. Recent research on adversarial bandits has primarily focussed
on these issues, especially the questions of obtaining regret guarantees that hold with high probability
as well as stronger guarantees when the losses are "nice" in some sense. While there are now a wide
range of strategies with upper bounds that depend on various quantities, the literature is missing lower
bounds for many cases, some of which we now provide.
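To make the quoted upper bounds concrete, here is a minimal sketch of the Exp3 strategy with importance-weighted loss estimates (our own code; the learning rate is one standard choice, not the exact tuning used in the cited analyses):

import numpy as np

def exp3(T, K, loss_fn, seed=0):
    """Minimal Exp3 with importance-weighted loss estimates.
    loss_fn(t, i) must return the loss of arm i at round t, in [0, 1]."""
    rng = np.random.default_rng(seed)
    eta = np.sqrt(2.0 * np.log(K) / (T * K))   # one standard tuning
    L_hat = np.zeros(K)                        # cumulative loss estimates
    total = 0.0
    for t in range(T):
        w = np.exp(-eta * (L_hat - L_hat.min()))  # stabilized weights
        p = w / w.sum()
        i = rng.choice(K, p=p)
        loss = loss_fn(t, i)
        total += loss
        L_hat[i] += loss / p[i]                # unbiased loss estimate
    return total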
We focus on three classes of lower bound, which are described in detail below. The first addresses the
optimal regret achievable with high probability, where we show there is little room for improvement
over existing strategies. Our other results concern lower bounds that depend on some kind of regularity
in the losses (?nice? data). Specifically we prove lower bounds that replace T in the regret bound
with the loss of the best action (called first-order bounds) and also with the quadratic variation of the
losses (called second-order bounds).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
High-probability bounds Existing strategies Exp3.P [Auer et al., 2002] and Exp3-IX [Neu, 2015a]
are tuned with a confidence parameter δ ∈ (0, 1) and satisfy, for all ℓ_{1:T} ∈ [0, 1]^{KT},

P( R_T(ℓ_{1:T}) ≥ c √(KT log(K/δ)) ) ≤ δ    (1)

for some universal constant c > 0. An alternative tuning of Exp3-IX or Exp3.P [Bubeck and Cesa-Bianchi, 2012] leads to a single algorithm for which, for all ℓ_{1:T} ∈ [0, 1]^{KT},

∀δ ∈ (0, 1):  P( R_T(ℓ_{1:T}) ≥ c √(KT) ( √(log K) + log(1/δ)/√(log K) ) ) ≤ δ.    (2)
The difference is that in (1) the algorithm depends on δ while in (2) it does not. The cost of not
knowing δ is that the log(1/δ) moves outside the square root. In Section 2 we prove two lower
bounds showing that there is little room for improvement in either (1) or (2).
First-order bounds An improvement over the worst-case regret bound of O(√(TK)) is the
so-called improvement for small losses. Specifically, there exist strategies (e.g., FPL-TRIX by Neu
[2015b], with earlier results by Stoltz [2005], Allenberg et al. [2006], Rakhlin and Sridharan [2013])
such that for all ℓ_{1:T} ∈ [0, 1]^{KT}

E[R_T(ℓ_{1:T})] ≤ O( √(L*_T K log K) + K log(KT) ),  with  L*_T = min_{1≤i≤K} Σ_{t=1}^T ℓ_{i,t},    (3)
where the expectation is with respect to the internal randomisation of the algorithm (the losses are
fixed). This result improves on the O(√(KT)) bounds since L*_T ≤ T is always guaranteed and
sometimes L*_T is much smaller than T. In order to evaluate the optimality of this bound, we first
rewrite it in terms of the small-loss balls B_{α,T} defined for all α ∈ [0, 1] and T ≥ 1 by

B_{α,T} := { ℓ_{1:T} ∈ [0, 1]^{KT} : L*_T / T ≤ α }.    (4)
Corollary 1. The first-order regret bound (3) of Neu [2015b] is equivalent to:
∀α ∈ [0, 1],  sup_{ℓ_{1:T} ∈ B_{α,T}} E[R_T(ℓ_{1:T})] ≤ O( √(αTK log K) + K log(KT) ).
The proof is straightforward. Our main contribution in Section 3 is a lower bound of the order of
√(αTK) for all α = Ω(log(T)/T). This minimax lower bound shows that we cannot hope for a better
bound than (3) (up to log factors) if we only know the value of L*_T.
Second-order bounds Another type of improved regret bound was derived by Hazan and Kale
[2011b] and involves a second-order quantity called the quadratic variation:
Q_T = Σ_{t=1}^T ‖ℓ_t − μ_T‖²₂ ≤ TK/4,    (5)

where μ_T = (1/T) Σ_{t=1}^T ℓ_t is the mean of all loss vectors. (In other words, Q_T/T is the sum of the
empirical variances of all the K arms). Hazan and Kale [2011b] addressed the general online linear
optimisation setting. In the particular case of adversarial K-armed bandits with an oblivious adversary
(as is the case here), they showed that there exists an efficient algorithm such that for some absolute
constant c > 0 and for all T ≥ 2,

∀ℓ_{1:T} ∈ [0, 1]^{KT}:  E[R_T(ℓ_{1:T})] ≤ c ( √(K² Q_T log T) + K^{1.5} log² T + K^{2.5} log T ).    (6)
As before we can rewrite the regret bound (6) in terms of the small-variation balls V_{α,T} defined for
all α ∈ [0, 1/4] and T ≥ 1 by

V_{α,T} := { ℓ_{1:T} ∈ [0, 1]^{KT} : Q_T/(TK) ≤ α }.    (7)
Corollary 2. The second-order regret bound (6) of Hazan and Kale [2011b] is equivalent to:
∀α ∈ [0, 1/4],  sup_{ℓ_{1:T} ∈ V_{α,T}} E[R_T(ℓ_{1:T})] ≤ c ( K² √(αTK log T) + K^{3/2} log² T + K^{5/2} log T ).
The proof is straightforward because the losses are deterministic and fixed in advance by an oblivious
adversary. In Section 4 we provide a lower bound of order √(αTK) that holds whenever α =
Ω(log(T)/T). This minimax lower bound shows that we cannot hope for a bound better than (7) by
more than a factor of √(K² log T) if we only know the value of Q_T. Closing the gap is left as an open
question.
Two impossibility results in the bandit setting We also show in Section 4 that, in contrast to
the full-information setting, regret bounds involving the cumulative variance of the algorithm as in
[Cesa-Bianchi et al., 2007] cannot be obtained in the bandit setting. More precisely, we prove that
two consequences that hold true in the full-information case, namely: (i) a regret bound proportional
to the effective range of the losses and (ii) a bounded regret if one arm performs best at all rounds,
must fail in the worst case for every bandit algorithm.
Additional notation and key tools Before the theorems we develop some additional notation and
describe the generic ideas in the proofs. For 1 ≤ i ≤ K let N_i(t) be the number of times action i
has been chosen after round t. All our lower bounds are derived by analysing the regret incurred
by strategies when facing randomised adversaries that choose the losses for all actions from the
same joint distribution in every round (sometimes independently for each action and sometimes not).
Ber(μ) denotes the Bernoulli distribution with parameter μ ∈ [0, 1]. If P and Q are measures on the
same probability space, then KL(P, Q) is the KL-divergence between them. For a < b we define
clip_{[a,b]}(x) = min{b, max{a, x}}, and for x, y ∈ ℝ we let x ∨ y = max{x, y}. Our main tools
throughout the analysis are the following information-theoretic lemmas. The first bounds the KL
divergence between the laws of the observed losses/actions for two distributions on the losses.
Lemma 1. Fix a randomised bandit algorithm and two probability distributions Q₁ and Q₂
on [0, 1]^K. Assume the loss vectors ℓ₁, ..., ℓ_T ∈ [0, 1]^K are drawn i.i.d. from either Q₁ or Q₂,
and denote by Q_j the joint probability distribution on all sources of randomness when Q_j is used
(formally, Q_j = P_int ⊗ Q_j^{⊗T}, where P_int is the probability distribution used by the algorithm for
its internal randomisation). Let t ≥ 1. Denote by h_t = (I_s, ℓ_{I_s,s})_{1≤s≤t−1} the history available
at the beginning of round t, by Q_j^{(h_t,I_t)} the law of (h_t, I_t) under Q_j, and by Q_{j,i} the ith marginal
distribution of Q_j. Then,

KL( Q₁^{(h_t,I_t)}, Q₂^{(h_t,I_t)} ) = Σ_{i=1}^K E_{Q₁}[ N_i(t − 1) ] KL( Q_{1,i}, Q_{2,i} ).
Results of roughly this form are well known and the proof follows immediately from the chain rule
for the relative entropy and the independence of the loss vectors across time (see [Auer et al., 2002]
or the supplementary material). One difference is that the losses need not be independent across the
arms, which we heavily exploit in our proofs by using correlated losses. The second key lemma is an
alternative to Pinsker's inequality that proves useful when the Kullback-Leibler divergence is larger
than 2. It has previously been used for bandit lower bounds (in the stochastic setting) by Bubeck et al.
[2013].
Lemma 2 (Lemma 2.6 in Tsybakov 2008). Let P and Q be two probability distributions on the
same measurable space. Then, for every measurable subset A (whose complement we denote by A^c),

P(A) + Q(A^c) ≥ (1/2) exp( −KL(P, Q) ).
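As a quick numerical sanity check of Lemma 2 — on a toy pair of Bernoulli measures of our own choosing, with the event A = {X = 1} — the following sketch compares both sides:

```python
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence KL(Ber(p), Ber(q)) in nats."""
    eps = 1e-12
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Lemma 2 with P = Ber(p), Q = Ber(q) and A = {X = 1}:
#   P(A) + Q(A^c) >= 0.5 * exp(-KL(P, Q)).
p, q = 0.55, 0.45
lhs = p + (1 - q)
rhs = 0.5 * np.exp(-kl_bernoulli(p, q))
assert lhs >= rhs
print(lhs, rhs)
```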
2  Zero-Order High-Probability Lower Bounds
We prove two new high-probability lower bounds on the regret of any bandit algorithm. The first
shows that no strategy can enjoy smaller regret than Ω(√(KT log(1/δ))) with probability at least
1 − δ. Upper bounds of this form have been shown for various algorithms including Exp3.P [Auer
et al., 2002] and Exp3-IX [Neu, 2015a]. Although this result is not very surprising, we are not aware
of any existing work on this problem and the proof is less straightforward than one might expect.
An added benefit of our result is that the loss sequences producing large regret have two special
properties. First, the optimal arm is the same in every round, and second, the range of the losses in
each round is O(√(K log(1/δ)/T)). These properties will be useful in subsequent analysis.
In the second lower bound we show that any algorithm for which E[R_T(ℓ_{1:T})] = O(√(KT)) must
necessarily suffer a high-probability regret of at least Ω(√(KT log(1/δ))) for some sequence ℓ_{1:T}.
The important difference relative to the previous result is that strategies with log(1/δ) appearing
inside the square root depend on a specific value of δ, which must be known in advance.
Theorem 1. Suppose K ≥ 2, δ ∈ (0, 1/4) and T ≥ 32(K − 1) log(2/δ). Then there exists a
sequence of losses ℓ_{1:T} ∈ [0, 1]^{KT} such that

P( R_T(ℓ_{1:T}) ≥ (1/27) √((K − 1) T log(1/(4δ))) ) ≥ δ/2,

where the probability is taken with respect to the randomness in the algorithm. Furthermore ℓ_{1:T}
can be chosen in such a way that there exists an i such that for all t it holds that ℓ_{i,t} = min_j ℓ_{j,t} and
max_{j,k} { ℓ_{j,t} − ℓ_{k,t} } ≤ √((K − 1) log(1/(4δ))/T) / (4 log 2).
Theorem 2. Suppose K ≥ 2, T ≥ 1, and there exists a strategy and constant C > 0 such that
for any ℓ_{1:T} ∈ [0, 1]^{KT} it holds that E[R_T(ℓ_{1:T})] ≤ C √((K − 1)T). Let δ ∈ (0, 1/4) satisfy
√((K − 1)/T) log(1/(4δ)) ≤ C and T ≥ 32 log(2/δ). Then there exists ℓ_{1:T} ∈ [0, 1]^{KT} for which

P( R_T(ℓ_{1:T}) ≥ √((K − 1) T log(1/(4δ))) / (203 C) ) ≥ δ/2,

where the probability is taken with respect to the randomness in the algorithm.
Corollary 3. If p ∈ (0, 1) and C > 0, then there does not exist a strategy such that for all T, K,
ℓ_{1:T} ∈ [0, 1]^{KT} and δ ∈ (0, 1) the regret is bounded by

P( R_T(ℓ_{1:T}) ≥ C √((K − 1)T) log^p(1/δ) ) ≤ δ.
The corollary follows easily by integrating the assumed high-probability bound and applying Theorem 2 for sufficiently large T and small δ. The proof may be found in the supplementary material.
Proof of Theorems 1 and 2 Both proofs rely on a carefully selected choice of correlated stochastic
losses described below. Let Z₁, Z₂, ..., Z_T be a sequence of i.i.d. Gaussian random variables with
mean 1/2 and variance σ² = 1/(32 log 2). Let Δ ∈ [0, 1/30] be a constant that will be chosen
differently in each proof, and define K random loss sequences ℓ¹_{1:T}, ..., ℓ^K_{1:T} where

ℓ^j_{i,t} = clip_{[0,1]}(Z_t − Δ)   if i = 1,
ℓ^j_{i,t} = clip_{[0,1]}(Z_t − 2Δ)  if i = j ≠ 1,
ℓ^j_{i,t} = clip_{[0,1]}(Z_t)       otherwise.
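A minimal sketch of this sampler (the helper name and 0-indexed arms are ours):

```python
import numpy as np

def sample_environment(j, K, T, delta, rng):
    """Sample the correlated loss table for environment Q_j.

    All arms share the same Gaussian noise Z_t (mean 1/2, variance
    1/(32*log 2)); arm 1 (index 0) is shifted by -delta and arm j by
    -2*delta, then everything is clipped to [0, 1].
    Returns an array of shape (T, K).
    """
    sigma = np.sqrt(1.0 / (32.0 * np.log(2.0)))
    z = rng.normal(0.5, sigma, size=T)
    losses = np.tile(z[:, None], (1, K))
    losses[:, 0] -= delta              # arm 1
    if j != 0:
        losses[:, j] -= 2.0 * delta    # arm j is the best arm
    return np.clip(losses, 0.0, 1.0)

rng = np.random.default_rng(0)
print(sample_environment(j=3, K=5, T=8, delta=0.02, rng=rng).round(3))
```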
For 1 ≤ j ≤ K let Q_j be the measure on ℓ_{1:T} ∈ [0, 1]^{KT} and I₁, ..., I_T when ℓ_{i,t} = ℓ^j_{i,t} for all
1 ≤ i ≤ K and 1 ≤ t ≤ T. Informally, Q_j is the measure on the sequence of loss vectors and actions
when the learner interacts with the losses sampled from the jth environment defined above.
Lemma 3. Let δ ∈ (0, 1) and suppose Δ ≤ 1/30 and T ≥ 32 log(2/δ). Then
Q_i( R_T(ℓ^i_{1:T}) ≥ ΔT/4 ) ≥ Q_i( N_i(T) ≤ T/2 ) − δ/2  and  E_{Q_i}[ R_T(ℓ^i_{1:T}) ] ≥ 7Δ E_{Q_i}[ T − N_i(T) ]/8.
The proof follows by substituting the definition of the losses and applying Azuma's inequality to
show that clipping does not occur too often. See the supplementary material for details.
Proof of Theorem 1. First we choose the value of Δ that determines the gaps in the losses by
Δ = √( σ²(K − 1) log(1/(4δ)) / (2T) ) ≤ 1/30. By the pigeonhole principle there exists an i > 1 for
which E_{Q₁}[N_i(T)] ≤ T/(K − 1). Therefore by Lemmas 2 and 1, and the fact that the KL divergence
between clipped Gaussian distributions is always smaller than without clipping,

Q₁( N₁(T) ≤ T/2 ) + Q_i( N₁(T) > T/2 ) ≥ (1/2) exp( −KL( Q₁^{(h_T,I_T)}, Q_i^{(h_T,I_T)} ) )
  ≥ (1/2) exp( −E_{Q₁}[N_i(T)] (2Δ)² / (2σ²) ) ≥ (1/2) exp( −2TΔ² / (σ²(K − 1)) ) = 2δ.

But by Lemma 3

max_{k∈{1,i}} Q_k( R_T(ℓ^k_{1:T}) ≥ TΔ/4 ) ≥ max{ Q₁( N₁(T) ≤ T/2 ), Q_i( N_i(T) ≤ T/2 ) } − δ/2
  ≥ (1/2) ( Q₁( N₁(T) ≤ T/2 ) + Q_i( N₁(T) > T/2 ) ) − δ/2 ≥ δ/2.
Therefore there exists an i ∈ {1, ..., K} such that

Q_i( R_T(ℓ^i_{1:T}) ≥ √( (σ² T (K − 1)/32) log(1/(4δ)) ) ) = Q_i( R_T(ℓ^i_{1:T}) ≥ TΔ/4 ) ≥ δ/2.

The result is completed by substituting the value of σ² = 1/(32 log 2) and by noting that
max_{j,k} { ℓ_{j,t} − ℓ_{k,t} } ≤ 2Δ ≤ √((K − 1) log(1/(4δ))/T) / (4 log 2) Q_i-almost surely.
Proof of Theorem 2. By the assumption on δ we have Δ = (7σ²/(16C)) √((K − 1)/T) log(1/(4δ)) ≤ 1/30. Suppose
for all i > 1 that

E_{Q₁}[N_i(T)] ≥ (σ²/(2Δ²)) log(1/(4δ)).    (8)

Then by the assumption in the theorem statement and the second part of Lemma 3 we have

C √((K − 1)T) ≥ E_{Q₁}[R_T(ℓ_{1:T})] ≥ (7Δ/8) E_{Q₁}[ Σ_{i=2}^K N_i(T) ] ≥ (7σ²(K − 1)/(16Δ)) log(1/(4δ)) = C √((K − 1)T),

which is a contradiction. Therefore there exists an i > 1 for which Eq. (8) does not hold. Then by the
same argument as the previous proof it follows that

max_{k∈{1,i}} Q_k( R_T(ℓ^k_{1:T}) ≥ (7σ²/(64C)) √((K − 1)T) log(1/(4δ)) ) = max_{k∈{1,i}} Q_k( R_T(ℓ^k_{1:T}) ≥ TΔ/4 ) ≥ δ/2.

The result is completed by substituting the value of σ² = 1/(32 log 2).
Remark 1. It is possible to derive similar high-probability regret bounds with non-correlated losses.
However the correlation makes the results cleaner (we do not need an additional concentration
argument to locate the optimal arm) and it is key to derive Corollaries 4 and 5 in Section 4.
3  First-Order Lower Bound
First-order upper bounds provide improvement over minimax bounds when the loss of the optimal
action is small. Recall from Corollary 1 that first-order bounds can be rewritten in terms of the
small-loss balls B_{α,T} defined in (4). Theorem 3 below provides a new lower bound of order √(L*_T K),
which matches the best existing upper bounds up to logarithmic factors. As is standard for minimax
results this does not imply a lower bound on every loss sequence ℓ_{1:T}. Instead it shows that we cannot
hope for a better bound if we only know the value of L*_T.

Theorem 3. Let K ≥ 2, T ≥ K ∨ 118, and α ∈ [ (c log(32T) ∨ (K/2))/T, 1/2 ], where c = 64/9.
Then for any randomised bandit algorithm sup_{ℓ_{1:T} ∈ B_{α,T}} E[R_T(ℓ_{1:T})] ≥ √(αTK)/27, where the
expectation is taken with respect to the internal randomisation of the algorithm.
Our proof is inspired by that of Auer et al. [2002, Theorem 5.1]. The key difference is that we take
Bernoulli distributions with parameter close to α instead of 1/2. This way the best cumulative loss
L*_T is ensured to be concentrated around αT, and the regret lower bound √(αTK) ≍ √(α(1 − α)TK)
can be seen to involve the variance α(1 − α)T of the binomial distribution with parameters α and T.
First we state the stochastic construction of the losses and prove a general lemma that allows us
to prove Theorem 3 and will also be useful in Section 4 to derive a lower bound in terms of the
quadratic variation. Let ε ∈ [0, 1 − α] be fixed and define K probability distributions (Q_j)_{j=1}^K on
[0, 1]^{KT} such that under Q_j the following hold:
• All random losses ℓ_{i,t} for 1 ≤ i ≤ K and 1 ≤ t ≤ T are independent.
• ℓ_{i,t} is sampled from a Bernoulli distribution with parameter α + ε if i ≠ j, or with parameter
α if i = j.
Lemma 4. Let α ∈ (0, 1), K ≥ 2, and T ≥ K/(4(1 − α)). Consider the probability distributions Q_j
on [0, 1]^{KT} defined above with ε = (1/2) √(α(1 − α)K/T), and set Q̄ = (1/K) Σ_{j=1}^K Q_j. Then for
any randomised bandit algorithm E[R_T(ℓ_{1:T})] ≥ √(α(1 − α)TK)/8, where the expectation is with
respect to both the internal randomisation of the algorithm and the random loss sequence ℓ_{1:T}, which
is drawn from Q̄.
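The construction in Lemma 4 is easy to instantiate numerically. The sketch below (helper names and parameter values are ours) builds the mean-loss table of the K environments and evaluates the pseudo-regret identity used in the proof below for a learner that plays uniformly at random:

```python
import numpy as np

def lemma4_envs(K, T, alpha):
    """Mean-loss vectors for the K environments of Lemma 4.

    Under Q_j every arm has mean alpha + eps except arm j, which has
    mean alpha, with eps = 0.5 * sqrt(alpha * (1 - alpha) * K / T).
    Returns a (K, K) array whose jth row is the mean vector under Q_j.
    """
    eps = 0.5 * np.sqrt(alpha * (1 - alpha) * K / T)
    means = np.full((K, K), alpha + eps)
    np.fill_diagonal(means, alpha)
    return means, eps

K, T, alpha = 10, 10000, 0.1
means, eps = lemma4_envs(K, T, alpha)
# Pseudo-regret under Q_j, as in (9) below:
#   T * eps * (1 - (1/T) * sum_t Q_j(I_t = j)).
# A uniformly random learner pulls arm j a fraction 1/K of the time:
pseudo_regret_uniform = T * eps * (1 - 1.0 / K)
print(eps, pseudo_regret_uniform, np.sqrt(alpha * (1 - alpha) * T * K) / 8)
```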
The assumption T ≥ K/(4(1 − α)) above ensures that ε ≤ 1 − α, so that the Q_j are well defined.
Proof of Lemma 4. We lower bound the regret by the pseudo-regret for each distribution Q_j:

E_{Q_j}[ Σ_{t=1}^T ℓ_{I_t,t} − min_{1≤i≤K} Σ_{t=1}^T ℓ_{i,t} ] ≥ E_{Q_j}[ Σ_{t=1}^T ℓ_{I_t,t} ] − min_{1≤i≤K} E_{Q_j}[ Σ_{t=1}^T ℓ_{i,t} ]
  = Σ_{t=1}^T E_{Q_j}[ α + ε − ε 1{I_t = j} ] − Tα = Tε ( 1 − (1/T) Σ_{t=1}^T Q_j(I_t = j) ),    (9)

where the first equality follows because E_{Q_j}[ℓ_{I_t,t}] = E_{Q_j}[ E_{Q_j}[ ℓ_{I_t,t} | ℓ_{1:t−1}, I_t ] ] = E_{Q_j}[ α + ε −
ε 1{I_t = j} ], since under Q_j the conditional distribution of ℓ_t given (ℓ_{1:t−1}, I_t) is simply ⊗_{i=1}^K Ber(α +
ε − ε 1{i = j}). To bound (9) from below, note that by Pinsker's inequality we have for all t ∈
{1, ..., T} and j ∈ {1, ..., K}, Q_j(I_t = j) ≤ Q₀(I_t = j) + ( KL(Q₀^{I_t}, Q_j^{I_t})/2 )^{1/2}, where Q₀ =
Ber(α + ε)^{⊗KT} is the joint probability distribution that makes all the ℓ_{i,t} i.i.d. Ber(α + ε), and
Q₀^{I_t} and Q_j^{I_t} denote the laws of I_t under Q₀ and Q_j respectively. Plugging the last inequality above
into (9), averaging over j = 1, ..., K and using the concavity of the square root yields

E_{Q̄}[ Σ_{t=1}^T ℓ_{I_t,t} − min_{1≤i≤K} Σ_{t=1}^T ℓ_{i,t} ] ≥ Tε ( 1 − 1/K − √( (1/(2T)) Σ_{t=1}^T (1/K) Σ_{j=1}^K KL(Q₀^{I_t}, Q_j^{I_t}) ) ),    (10)

where we recall that Q̄ = (1/K) Σ_{j=1}^K Q_j. The rest of the proof is devoted to upper-bounding
KL(Q₀^{I_t}, Q_j^{I_t}). Denote by h_t = (I_s, ℓ_{I_s,s})_{1≤s≤t−1} the history available at the beginning of round t.
From Lemma 1

KL( Q₀^{I_t}, Q_j^{I_t} ) ≤ KL( Q₀^{(h_t,I_t)}, Q_j^{(h_t,I_t)} ) = E_{Q₀}[ N_j(t − 1) ] KL( Ber(α + ε), Ber(α) )
  ≤ E_{Q₀}[ N_j(t − 1) ] ε² / (α(1 − α)),    (11)

where the last inequality follows by upper bounding the KL divergence by the χ² divergence (see the
supplementary material). Averaging (11) over j ∈ {1, ..., K} and t ∈ {1, ..., T} and noting that
Σ_{t=1}^T (t − 1) ≤ T²/2 we get

(1/T) Σ_{t=1}^T (1/K) Σ_{j=1}^K KL( Q₀^{I_t}, Q_j^{I_t} ) ≤ (1/T) Σ_{t=1}^T (t − 1) ε² / (K α(1 − α)) ≤ T ε² / (2K α(1 − α)).

Plugging the above inequality into (10) and using the definition of ε = (1/2) √(α(1 − α)K/T) yields

E_{Q̄}[ Σ_{t=1}^T ℓ_{I_t,t} − min_{1≤i≤K} Σ_{t=1}^T ℓ_{i,t} ] ≥ Tε ( 1 − 1/K − 1/4 ) ≥ (1/8) √(α(1 − α)TK).
Proof of Theorem 3. We show that there exists a loss sequence ℓ_{1:T} ∈ [0, 1]^{KT} such that L*_T ≤ αT
and E[R_T(ℓ_{1:T})] ≥ (1/27) √(αTK). Lemma 4 above provides this kind of lower bound, but without
the guarantee on L*_T. For this purpose we will use Lemma 4 with a smaller value of α (namely, α/2)
and combine it with Bernstein's inequality to prove that L*_T ≤ Tα with high probability.
Part 1: Applying Lemma 4 with α/2 (note that T ≥ K ≥ K/(4(1 − α/2)) by assumption on T)
and noting that max_j E_{Q_j}[R_T(ℓ_{1:T})] ≥ E_{Q̄}[R_T(ℓ_{1:T})], we get that for some j ∈ {1, ..., K} the
probability distribution Q_j defined with ε = (1/2) √((α/2)(1 − α/2)K/T) satisfies

E_{Q_j}[R_T(ℓ_{1:T})] ≥ (1/8) √( (α/2)(1 − α/2) TK ) ≥ (1/32) √(6αTK)    (12)

since α ≤ 1/2 by assumption.
Part 2: Next we prove that

Q_j( L*_T > Tα ) ≤ 1/(32T).    (13)
To this end, first note that L*_T ≤ Σ_{t=1}^T ℓ_{j,t}. Second, note that under Q_j, the ℓ_{j,t}, t ≥ 1, are i.i.d.
Ber(α/2). We can thus use Bernstein's inequality: applying Theorem 2.10 (and a remark on p. 38)
of Boucheron et al. [2013] with X_t = ℓ_{j,t} − α/2 ≤ 1 = b, with v = T(α/2)(1 − α/2), and with
c = b/3 = 1/3, we get that, for all δ ∈ (0, 1), with Q_j-probability at least 1 − δ,

L*_T ≤ Σ_{t=1}^T ℓ_{j,t} ≤ Tα/2 + √( 2T(α/2)(1 − α/2) log(1/δ) ) + (1/3) log(1/δ)
  ≤ Tα/2 + (1 + 1/3) √( Tα log(1/δ) ) ≤ Tα/2 + Tα/2 = Tα,    (14)

where the second last inequality is true whenever Tα ≥ log(1/δ), and the last is true whenever
Tα ≥ (8/3)² log(1/δ) = c log(1/δ). By assumption on α, these two conditions are satisfied for
δ = 1/(32T), which concludes the proof of (13).
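A quick Monte-Carlo check of (13)–(14), under one admissible choice of T and α of our own (the sampled sums play the role of Σ_t ℓ_{j,t} under Ber(α/2)):

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha = 2000, 0.1
# Check the condition of Theorem 3: T * alpha >= (64/9) * log(32 * T).
assert T * alpha >= (64.0 / 9.0) * np.log(32.0 * T)
sums = rng.binomial(T, alpha / 2.0, size=100_000)   # i.i.d. Ber(alpha/2) sums
# Empirical frequency of {sum > T * alpha} versus the target bound 1/(32T).
print((sums > T * alpha).mean(), 1.0 / (32.0 * T))
```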
Conclusion: We show by contradiction that there exists a loss sequence ℓ_{1:T} ∈ [0, 1]^{KT} such that
L*_T ≤ αT and

E[R_T(ℓ_{1:T})] ≥ (1/64) √(6αTK),    (15)

where the expectation is with respect to the internal randomisation of the algorithm. Imagine for
a second that (15) were false for every loss sequence ℓ_{1:T} ∈ [0, 1]^{KT} satisfying L*_T ≤ αT. Then
we would have 1{L*_T ≤ αT} E_{Q_j}[ R_T(ℓ_{1:T}) | ℓ_{1:T} ] ≤ (1/64) √(6αTK) almost surely (since the internal
source of randomness of the bandit algorithm is independent of ℓ_{1:T}). Therefore by the tower rule for
the first expectation on the r.h.s. below, we would get

E_{Q_j}[R_T(ℓ_{1:T})] = E_{Q_j}[ R_T(ℓ_{1:T}) 1{L*_T ≤ αT} ] + E_{Q_j}[ R_T(ℓ_{1:T}) 1{L*_T > αT} ]
  ≤ (1/64) √(6αTK) + T Q_j( L*_T > Tα ) ≤ (1/64) √(6αTK) + 1/32 < (1/32) √(6αTK),    (16)

where (16) follows from (13) and by noting that 1/32 < (1/64) √(6αTK) since α ≥ K/(2T) ≥
4/(6T) ≥ 4/(6TK). Comparing (16) and (12) we get a contradiction, which proves that there exists
a loss sequence ℓ_{1:T} ∈ [0, 1]^{KT} satisfying both L*_T ≤ αT and (15). We conclude the proof by
noting that √6/64 ≥ 1/27. Finally, the condition T ≥ K ∨ 118 is sufficient to make the interval
[ (c log(32T) ∨ (K/2))/T, 1/2 ] non-empty.
4  Second-Order Lower Bounds
We start by giving a lower bound on the regret in terms of the quadratic variation that is close to
existing upper bounds except in the dependence on the number of arms. Afterwards we prove that
bandit strategies cannot adapt to losses that lie in a small range or the existence of an action that is
always optimal.
Lower bound in terms of quadratic variation We prove a lower bound of Ω(√(αTK)) over any
small-variation ball V_{α,T} (as defined by (7)) for all α = Ω(log(T)/T). This minimax lower bound
matches the upper bound of Corollary 2 up to a multiplicative factor of √(K² log T). Closing this
gap is left as an open question, but we conjecture that the upper bound is loose (see also the COLT
open problem by Hazan and Kale [2011a]).
Theorem 4. Let K ≥ 2, T ≥ (32K) ∨ 601, and α ∈ [ (2c₁ log(T) ∨ 8K)/T, 1/4 ],
where c₁ = (4/9)²(3√5 + 1)² ≤ 12. Then for any randomised bandit algorithm,
sup_{ℓ_{1:T} ∈ V_{α,T}} E[R_T(ℓ_{1:T})] ≥ √(αTK)/25, where the expectation is taken with respect to the internal
randomisation of the algorithm.
The proof is very similar to that of Theorem 3; it also follows from Lemma 4 and Bernstein's
inequality. It is postponed to the supplementary material.
Impossibility results In the full-information setting (where the entire loss vector is observed after
each round) Cesa-Bianchi et al. [2007, Theorem 6] designed a carefully tuned exponential weighting
algorithm for which the regret depends on the variation of the algorithm and the range of the losses:
∀ℓ_{1:T} ∈ ℝ^{KT},  E[R_T(ℓ_{1:T})] ≤ 4 √(V_T log K) + 4 E_T log K + 6 E_T,    (17)
where the expectation is taken with respect to the internal randomisation of the algorithm,
E_T = max_{1≤t≤T} max_{1≤i,j≤K} |ℓ_{i,t} − ℓ_{j,t}| denotes the effective range of the losses, and
V_T = Σ_{t=1}^T Var_{I_t∼p_t}(ℓ_{I_t,t}) denotes the cumulative variance of the algorithm (in each round t the
expert's action I_t is drawn at random from the weight vector p_t). The bound in (17) is not closed-form
because V_T depends on the algorithm, but it has several interesting consequences:
1. If for all t the losses ℓ_{i,t} lie in an unknown interval [a_t, a_t + η] with a small width η > 0, then
Var_{I_t∼p_t}(ℓ_{I_t,t}) ≤ η²/4, so that V_T ≤ Tη²/4. Hence

E[R_T(ℓ_{1:T})] ≤ 2η √(T log K) + 4η log K + 6η.

Therefore, though the algorithm by Cesa-Bianchi et al. [2007, Section 4.2] does not use the prior
knowledge of a_t or η, it is able to incur a regret that scales linearly in the effective range η.
2. If all the losses ℓ_{i,t} are nonnegative, then by Corollary 3 of [Cesa-Bianchi et al., 2007] the
second-order bound (17) implies the first-order bound

E[R_T(ℓ_{1:T})] ≤ 4 √( L*_T (M_T − L*_T/T) log K ) + 39 M_T max{1, log K},    (18)

where M_T = max_{1≤t≤T} max_{1≤i≤K} ℓ_{i,t}.
where MT = max16t6T max16i6K `i,t .
3. If there exists an arm i? that is optimal at every round t (i.e., `i? ,t = mini `i,t for all t > 1), then
any translation-invariant algorithm with regret guarantees as in (18) above suffers a bounded
regret. This is the case for the fully automatic algorithm of Cesa-Bianchi et al. [2007, Theorem 6]
mentioned above. Then by the translation invariance of the algorithm all losses `i,t appearing in
the regret bound can be replaced with the translated losses `i,t ? `i? ,t > 0, so that a bound of
the same form as (18) implies a regret bound of O(log K).
4. Assume that the loss vectors `t are i.i.d. with a unique optimal arm in expectation (i.e., there
exists i? such that E[`i? ,1 ] < E[`i,1 ] for all i 6= i? ). Then using the Hoeffding-Azuma inequality
we can show that the algorithm of Cesa-Bianchi et al. [2007, Section 4.2] has with high
probability a bounded cumulative variance VT , and therefore (by (17)) incurs a bounded regret,
in the same spirit as in de Rooij et al. [2014], Gaillard et al. [2014].
We already know that point 2 has a counterpart in the bandit setting. If one is prepared to ignore
logarithmic terms, then point 4 also has an analogue in the bandit setting due to the existence
of logarithmic regret guarantees for stochastic bandits [Lai and Robbins, 1985]. The following
corollaries show that in the bandit setting it is not possible to design algorithms to exploit the range
of the losses or the existence of an arm that is always optimal. We use Theorem 1 as a general tool,
but the bounds can be improved to √(TK)/30 by analysing the expected regret directly (similar to
Lemma 4).
Corollary 4. Let K ≥ 2, T ≥ 32(K − 1) log(14) and η ≥ 0.22 √((K − 1)/T). Then for any
randomised bandit algorithm, sup_{ℓ₁,...,ℓ_T ∈ C_η} E[R_T(ℓ_{1:T})] ≥ √(T(K − 1))/504, where the expectation
is with respect to the randomness in the algorithm, and C_η := { x ∈ [0, 1]^K : max_{i,j} |x_i − x_j| ≤ η }.
Corollary 5. Let K ≥ 2 and T ≥ 32(K − 1) log(14). Then, for any randomised bandit algorithm,
there is a loss sequence ℓ_{1:T} ∈ [0, 1]^{KT} such that there exists an arm i* that is optimal at every
round t (i.e., ℓ_{i*,t} = min_i ℓ_{i,t} for all t ≥ 1), but E[R_T(ℓ_{1:T})] ≥ √(T(K − 1))/504, where the
expectation is with respect to the randomness in the algorithm.
Proof of Corollaries 4 and 5. Both results follow from Theorem 1 by choosing δ = 0.15. Therefore
there exists an ℓ_{1:T} such that P{ R_T(ℓ_{1:T}) ≥ √((K − 1)T log(1/(4 · 0.15)))/27 } ≥ 0.15/2, which
implies (since R_T(ℓ_{1:T}) ≥ 0 here) that E[R_T(ℓ_{1:T})] ≥ √((K − 1)T)/504. Finally note that ℓ_{1:T} ∈
C_η since η ≥ √((K − 1) log(1/(4δ))/T)/(4 log 2) and there exists an i such that ℓ_{i,t} ≤ ℓ_{j,t} for all
j and t.
Acknowledgments
The authors would like to thank Aurélien Garivier and Émilie Kaufmann for insightful discussions. This work was partially supported by the CIMI (Centre International de Mathématiques et
d'Informatique) Excellence program. The authors acknowledge the support of the French Agence
Nationale de la Recherche (ANR), under grants ANR-13-BS01-0005 (project SPADRO) and ANR-13-CORD-0020 (project ALICIA).
References
C. Allenberg, P. Auer, L. Györfi, and G. Ottucsák. Hannan consistency in on-line learning in case of unbounded losses under partial monitoring. In Proceedings of ALT 2006, pages 229–243. Springer, 2006.
J. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the Conference on Learning Theory (COLT), pages 217–226, 2009.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on, pages 322–331. IEEE, 1995.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multi-armed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities: a nonasymptotic theory of independence. Oxford University Press, 2013.
S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
S. Bubeck, V. Perchet, and P. Rigollet. Bounded regret in stochastic multi-armed bandits. In Proceedings of The 26th Conference on Learning Theory, pages 122–134, 2013.
N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Mach. Learn., 66(2/3):321–352, 2007.
S. de Rooij, T. van Erven, P. D. Grünwald, and W. M. Koolen. Follow the leader if you can, hedge if you must. J. Mach. Learn. Res., 15(Apr):1281–1316, 2014.
P. Gaillard, G. Stoltz, and T. van Erven. A second-order bound with excess losses. In Proceedings of the 27th Conference on Learning Theory (COLT 2014), 2014.
E. Hazan and S. Kale. A simple multi-armed bandit algorithm with optimal variation-bounded regret. In Proceedings of the 24th Conference on Learning Theory, pages 817–820, 2011a.
E. Hazan and S. Kale. Better algorithms for benign bandits. J. Mach. Learn. Res., 12(Apr):1287–1311, 2011b.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Adv. in Appl. Math., 6:4–22, 1985.
G. Neu. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015a.
G. Neu. First-order regret bounds for combinatorial semi-bandits. In Proceedings of The 28th Conference on Learning Theory, pages 1360–1375, 2015b.
A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In Proceedings of the 26th Conference on Learning Theory, pages 993–1019, 2013.
G. Stoltz. Incomplete information and internal regret in prediction of individual sequences. PhD thesis, Paris-Sud XI University, 2005.
A. Tsybakov. Introduction to nonparametric estimation. Springer Science & Business Media, 2008.
CRF-CNN: Modeling Structured Information in
Human Pose Estimation
Xiao Chu
The Chinese University of Hong Kong
[email protected]
Wanli Ouyang
The Chinese University of Hong Kong
[email protected]
Hongsheng Li
The Chinese University of Hong Kong
[email protected]
Xiaogang Wang
The Chinese University of Hong Kong
[email protected]
Abstract
Deep convolutional neural networks (CNN) have achieved great success. On
the other hand, modeling structural information has been proved critical in many
vision problems. It is of great interest to integrate them effectively. In a classical
neural network, there is no message passing between neurons in the same layer. In
this paper, we propose a CRF-CNN framework which can simultaneously model
structural information in both output and hidden feature layers in a probabilistic way,
and it is applied to human pose estimation. A message passing scheme is proposed,
so that in various layers each body joint receives messages from all the others
in an efficient way. Such message passing can be implemented with convolution
between feature maps in the same layer, and it is also integrated with feedforward
propagation in neural networks. Finally, a neural network implementation of end-to-end learning CRF-CNN is provided. Its effectiveness is demonstrated through
experiments on two benchmark datasets.
1  Introduction
A lot of efforts have been devoted to structure design of convolutional neural network (CNN). They
can be divided into two groups. One is to achieve higher expressive power by making CNN deeper
[19, 10, 20]. The other is to model structures among features and outputs, either as post processing
[6, 2] or as extra information to guide the learning of CNN [29, 22, 24]. They are complementary.
Human pose estimation is to estimate body joint locations from 2D images, which could be applied to
assist other tasks such as [4, 14, 26]. The very first attempt adopting CNN for human pose estimation
is DeepPose [23]. It used CNN to regress joint locations repeatedly without directly modeling the
output structure. However, the prediction of body joint locations relies both on their own appearance
scores and the prediction of other joints. Hence, the output space for human pose estimation is
structured. Later, Chen and Yuille [2] used a graphical model for the spatial relationship between
body joints and used it as post processing after CNN. Learning CNN features and structured output
together was proposed in [22, 21, 24]. Researchers were also aware of the importance of introducing
structures at the feature level [3]. However, the design of CNN for structured output and structured
features was heuristic, without principled guidance on how information should be passed. As deep
models are shown effective for many practical applications, researchers on statistical learning and
deep learning try to use probabilistic models to illustrate the ideas behind deep models [9, 7, 29].
Motivated by these works, we provide a CRF framework that models structures in both output and
hidden feature layers in CNN, called CRF-CNN. It provides us with a principled illustration on how
to model structured information at various levels in a probabilistic way and what are the assumptions
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
made when incorporating different CRF into CNN. Existing works can be illustrated as special
implementations of CRF-CNN. DeepPose [23] only considered the feature-output relationship, and
the approaches in [2, 22] considered feature-output and output-output relationships. In contrast, our
proposed full CRF-CNN model takes feature-output, output-output, and feature-feature relationships
into consideration, which is novel in pose estimation.
It also facilitates us in borrowing the idea behind the sum-product algorithm and developing a message
passing scheme so that each body joint receives messages from all the others in an efficient way by
saving intermediate messages. Given a set of body joints as vertices on a graph, there is no conclusion
on whether a tree structured model [28, 8] or a loopy structured model [25, 16] is the best choice.
A tree structure has exact inference while a loopy structure can model more complex relationship
among vertices. Our proposed message passing scheme is applicable to both.
Our contributions can be summarized as follows. (1) A CRF is proposed to simultaneously model
structured features and structured body part spatial relationship. We show step by step how approximations are made to use an end-to-end learning CNN for implementing such CRF model. (2)
Motivated by the efficient algorithm for marginalization on tree structures, we provide a message
passing scheme for our CRF-CNN so that every vertex receives messages from all the others in an
efficient way. Message passing can be implemented with convolution between feature maps in the
same layer. Because of the approximation used, this message passing can be used for both tree and
loopy structures. (3) CRF-CNN is applied to two human pose estimation benchmark datasets and
achieve better performance on both dataset compared with previous methods.
2  CRF-CNN
The power of combining statistical models with CNN has been proved [6, 3]. In this section
we start with a brief review of CRF and study how the pose estimation problem can be formulated
under the proposed CRF-CNN framework. It includes estimating body joints independently from
CNN features, modeling the spatial relationship of body joints in the output layer of CNN, and
modeling the spatial relationship of features in the hidden layers of CNN.
Let I denote an image, and z = {z1 , ..., zN } denote locations of N body joints. We are interested in
modeling the conditional probability p(z|I, θ) parameterized by θ, expressed as a Gibbs distribution:

p(z|I, θ) = e^{−En(z,I,θ)} / Z = e^{−En(z,I,θ)} / Σ_{z∈Z} e^{−En(z,I,θ)},    (1)
where En(z, I, θ) is the energy function. The conditional distribution obtained by introducing latent variables
h = {h₁, h₂, ..., h_K} can be modeled as follows:
p(z|I, θ) = Σ_h p(z, h|I, θ),  where  p(z, h|I, θ) = e^{−En(z,h,I,θ)} / Σ_{z∈Z, h∈H} e^{−En(z,h,I,θ)}.    (2)
En(z, h, I, θ) is the energy function, to be defined later. The latent variables correspond to features
obtained from a neural network in our implementation. We define an undirected graph G = (V, E),
where V = z ∪ h and E = E_z ∪ E_h ∪ E_zh. Here E_z, E_h, and E_zh denote the sets of edges connecting body joints,
connecting latent variables, and connecting latent variables with body joints, respectively.
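To make the Gibbs form in (1)–(2) concrete, the following toy sketch (the energy function and state space are our own, chosen for illustration) computes an exact Gibbs distribution over two binary joints:

```python
import numpy as np
from itertools import product

def gibbs(energy, states):
    """Exact Gibbs distribution p(s) proportional to exp(-energy(s))
    over a finite state set."""
    e = np.array([energy(s) for s in states])
    p = np.exp(-(e - e.min()))            # subtract min for numerical stability
    return p / p.sum()

# Toy example: two binary joints with a pairwise term that favours agreement.
states = list(product([0, 1], repeat=2))
energy = lambda s: 0.5 * s[0] + 0.2 * s[1] + 1.0 * (s[0] != s[1])
print(dict(zip(states, gibbs(energy, states).round(3))))
```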
2.1  Model 1
Denote by ∅ the empty set. If we suppose there is no edge connecting joints and no edge connecting
latent variables in the graphical model, i.e. E_z = ∅ and E_h = ∅, then

p(z, h|I, θ) = Π_i p(z_i|h, I, θ) Π_k p(h_k|I, θ),    (3)

En(z, h, I, θ) = Σ_{(i,k)∈E_zh} φ_zh(z_i, h_k) + Σ_k φ_h(h_k, I),    (4)
where φ_h(·) denotes the unary/data term for image I, and φ_zh(·,·) denotes the terms for the correlations
between latent variables h and body joint configurations z. This corresponds to the model in Fig. 1(a)
and is a typical feedforward neural network.
Figure 1: Different implementations of the CRF-CNN framework: (a) multi-layer neural network; (b) structured output space; (c) structured hidden layer; (d) our implementation.
Example. In DeepPose [23], CNN features h in the top hidden layer were obtained from images, and
could be treated as latent variables, illustrated by the term φ_h(h_k, I) in (4). There is no connection
between neurons in hidden layers. Body joint locations were estimated from CNN features in [23],
which could be illustrated by the term φ_zh(z_i, h_k). The body joints are independently estimated
without considering their correlations, which means E_z = ∅.
2.2  Model 2

If we suppose E_h = ∅ in the graphical model, p(z, h|I, θ) becomes

p(z, h|I, θ) = p(z|h, I, θ) Π_k p(h_k|I, θ).    (5)

Compared with (3), joint locations are no longer independent. The energy function for this model is

En(z, h, I, θ) = Σ_{(i,j)∈E_z, i<j} φ_z(z_i, z_j) + Σ_{(i,k)∈E_zh} φ_zh(z_i, h_k) + Σ_k φ_h(h_k, I).    (6)

It corresponds to the model in Fig. 1(b). Compared with (4), φ_z(z_i, z_j) in (6) is added to model the
pairwise relationship between joints.
Example. To model the spatial relationship among body joints, the approach of Yang et al. [22]
built up pairwise terms and spatial models. They are different implementations of φ_z(z_i, z_j) in (6).
2.3  Our model

In our model, h is considered as a set of discrete latent variables and each h_k is represented as a
1-of-L L-dimensional vector. p(z, h|θ) and En(z, h, I, θ) for this model are:

p(z, h|I, θ) = p(z|h, I, θ) p(h|I, θ).    (7)

En(z, h, I, θ) = Σ_{(k,l)∈E_h, k<l} φ_h(h_k, h_l) + Σ_{(i,j)∈E_z, i<j} φ_z(z_i, z_j) + Σ_{(i,k)∈E_zh} φ_zh(z_i, h_k) + Σ_k φ_h(h_k, I).    (8)
It is the model in Fig. 1(c) and exhibits the largest expressive power compared with the models in (4)
and (6). φ_h(h_k, h_l) is added in (8) to model the pairwise relationship among features/latent variables.
Details on the set of edges E. Body joints have structures and it may not be suitable to use a fully
connected graph. The tree structure in Fig. 2(b) is widely used since it fits human knowledge on
the skeleton of body joints and how body parts articulate. A further benefit for a tree structure
with N vertices is that all vertices can receive messages from others with 2N message passing
operations. To better define the structure of latent variables h, we group the latent variables so
that a joint z_i corresponds to a particular group of latent variables denoted by h^i, and h = ∪_i h^i.
The term Σ_{(i,k)∈E_zh} φ_zh(z_i, h_k) in (8) is simplified into Σ_{i=1}^N φ_zh(z_i, h^i), i.e. z_i is only connected to latent
variables in h^i. We further constrain connections among feature groups: (h^i, h^j) ∈ E_h ⟺
(z_i, z_j) ∈ E_z. It means that feature groups are connected if and only if their corresponding body
joints are connected. Fig. 1(d) shows an example of this model. Our implementation is as follows:
En(z, h, I, θ) = Σ_{(i,j)∈E_h, i<j} φ_h(h^i, h^j) + Σ_{(i,j)∈E_z, i<j} φ_z(z_i, z_j) + Σ_{i=1}^N φ_zh(z_i, h^i) + Σ_{k=1}^K φ_h(h_k, I).    (9)

3  Implementation with neural networks
In order to marginalize latent variables h and obtain p(z|I, θ), the computational complexity of
marginalization in (2) is high, exponentially proportional to the cardinality of h. In order to infer
p(z|I, θ) in a more efficient way, we use the following approximations:

p(z|I, θ) = Σ_h p(z, h|I, θ) = Σ_h p(z|h, I, θ) p(h|I, θ) ≈ p(z|ĥ, I, θ),    (10)

where ĥ = [ĥ¹, ĥ², ..., ĥ^N] = E[h] = Σ_h h p(h|I, θ).    (11)
In (10) and (11), we replace h by its average configuration ĥ = E[h]; this approximation was
also used in greedy layer-wise learning for deep belief nets in [11].
p(h|I, θ) ≈ Π_i Q(h^i|I, θ),    (12)

Q(h^i|I, θ) = (1/Z_{h,i}) exp{ −Σ_{h_k∈h^i} φ_h(h_k, I) − Σ_{(i,j)∈E_h, i<j} φ_h(h^i, Q(h^j|I, θ)) }.    (13)

The target is to marginalize the distribution of h, as shown in (12). We adopt the classical mean-field
approximation approach for message passing [15]. p(h|I, θ) in (11) is approximated by a product of
independent factors Q(h^i|I, θ) in (12) and (13).
We first ignore the pairwise term φ_h(h^i, h^j), which will be addressed later in Section 3.1. Suppose
φ_h(h_k, I) = h_k w_k^T f, where f is the feature representation of image I. For a binary latent variable h_k,

ĥ_k = E[h_k] = Σ_{h_k} h_k Q(h_k|I, θ) = sigm(φ_h(h_k, I)) = sigm(w_k^T f),    (14)

where sigm(x) = 1/(1 + e^{−x}) is the sigmoid function. Therefore, the mapping from f to ĥ can be
implemented with a one-layer transformation in a neural network, with sigmoid as the activation function.
ĥ is a new feature vector derived from f, and f can be obtained from lower layers in a network.
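A minimal sketch of this unary mean-field update as a one-layer network (the weights, sizes, and names below are placeholders of our own):

```python
import numpy as np

def mean_field_unaries(f, W):
    """Expected binary latents under the factorised model of (14):
    h_hat_k = sigmoid(w_k^T f), i.e. one linear layer plus sigmoid."""
    return 1.0 / (1.0 + np.exp(-(W @ f)))

rng = np.random.default_rng(0)
f = rng.normal(size=64)                      # image feature vector (e.g. from fc6)
W = rng.normal(scale=0.1, size=(32, 64))     # one row w_k per latent variable
h_hat = mean_field_unaries(f, W)
print(h_hat.shape, float(h_hat.min()), float(h_hat.max()))
```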
3.1  Message passing on tree structured latent variables

In order to infer p(z|I, θ), the key challenge in our framework is to obtain the marginalized distribution of hidden units, i.e., Q(h^i|I, θ) in (12). One can obtain Q(h^i|I, θ) through message passing
and further estimate ĥ. Then p(z|ĥ, I, θ) in (10) can be estimated with existing works such as [2, 28].
According to the sum-product algorithm for a tree structure, every node can receive the messages
from other nodes through two message passing routes, first from the leaves to a root and then from the
root back to the leaves [13]. The key is to have a planned route and to store the intermediate messages.
Our proposed message passing algorithm is summarized in Algorithm 1. An example of message
passing for a tree structure with 4 nodes is shown in Fig. 2(c). For detailed illustrations of Fig. 2,
please refer to the supplementary material. We drop I and θ to be concise.
Algorithm 1 Message passing among features on a factor graph.
1: procedure BELIEFPROPAGATION(θ)
2:     U_k ← f ∗ w_k, for k = 1 to K                              ▷ Initialization
3:     for m = 1 to M do                                          ▷ Passing messages M times
4:         Select a predefined message passing route S_m
5:         for e = 1 to |E_h| do
6:             Choose an edge (j → k) from E_h according to the route S_m
7:             Denote ne(j) as the set of neighboring nodes for node j on the graphical model
8:             if k is a factor node denoted by f_k then
9:                 F_{j→f_k} ← U_j + Σ_{f_p∈ne(j)\k} F_{f_p→j}      ▷ Pass messages from factors to a variable
10:                Q_{j→f_k} ← σ(λ F_{j→k})                         ▷ Normalize
11:            else
12:                Denote the factor node j by f_j
13:                F_{f_j→k} ← Σ_{p∈par(j)\k} Q_{p→f_j} ∗ w_{p→k}   ▷ Pass messages to the factor
14:            end if
15:        end for
16:    end for
17:    for k = 1 to K do
18:        Q(h_k) ← σ( U_k + Σ_{f_p∈ne(k)} F_{f_p→k} )
19:    end for
20: end procedure
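The sketch below mirrors the two-sweep (leaves-to-root, then root-to-leaves) structure of Algorithm 1 on a small tree with dense vector messages. It is a simplified stand-in, not the paper's exact implementation: softmax replaces σ(λ·), matrix multiplication replaces convolution, and all names, shapes, and weights are our own.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def serial_message_passing(unary, edges, W):
    """Two-sweep message passing on a tree.

    unary[i]: data term U_i (length-L vector) for latent group i.
    edges: tree edges as (parent, child) pairs, listed root-first.
    W[(i, j)]: L x L compatibility matrix carrying messages i -> j.
    Returns belief-like marginals Q(h_i) after both sweeps.
    """
    N = len(unary)
    msgs = {}
    # Sweep 1: leaves to root (deepest edges first, hence reversed order).
    for parent, child in reversed(edges):
        incoming = sum(msgs.get((g, child), 0.0) for g in range(N))
        msgs[(child, parent)] = W[(child, parent)] @ softmax(unary[child] + incoming)
    # Sweep 2: root to leaves.
    for parent, child in edges:
        incoming = sum(msgs.get((g, parent), 0.0)
                       for g in range(N) if g != child)
        msgs[(parent, child)] = W[(parent, child)] @ softmax(unary[parent] + incoming)
    return [softmax(unary[i] + sum(msgs.get((g, i), 0.0) for g in range(N)))
            for i in range(N)]

rng = np.random.default_rng(0)
L, edges = 8, [(0, 1), (0, 2), (2, 3)]            # a 4-node tree rooted at node 0
unary = [rng.normal(size=L) for _ in range(4)]
W = {e: rng.normal(scale=0.3, size=(L, L)) for e in edges}
W.update({(b, a): rng.normal(scale=0.3, size=(L, L)) for a, b in edges})
print(np.round(serial_message_passing(unary, edges, W)[3], 3))
```

After the two sweeps, every node's belief combines its unary term with messages from all other nodes, which is the efficiency argument made above for the 2N-operation schedule.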
?"
?"
&'
&'
?#
?#
&(
?$
(a)
(b)
&(
&)
?%
&)
?$
(c)
?%
(d)
(e)
Figure 2: Message passing. (a) is the annotation of a person with its tree structure. (b) is the tree
structured model employed on the LSP dataset. In (b), the pink colored nodes are linearly interpolated.
(c,d) show message passing on a factored graph with different routes. (e) is a loopy model. In (e), the
edges in green color are extra edges added on the tree structured model in(b).
According to the mean-field approximation algorithm, the above message passing process should
be conducted multiple times with shared parameters in order to converge. To implement φ_h(h^i, h^j), we
use matrix multiplication for easier illustration, but convolution (which is a special form of matrix
multiplication) for the implementation in Algorithm 1. Message passing is then implemented with
convolution between feature maps.
The proposed method is extensible to loopy structured graphs, as shown in Fig. 2(e). The underlying
concept of building a probabilistic model at the feature level is the same. However, for loopy structures,
the key challenge is to define the rule for message passing. Either a sequence of asymmetric message
passing orders is predefined, which seems unreasonable for the symmetric structure of human poses, or
the flooding scheme is used to repeatedly collect information from neighboring joints. We compare the tree
structure with the loopy structure under the flooding scheme in the experimental section.
3.2  Overall picture of CRF-CNN for human pose estimation

An overview of the approach is shown in Fig. 3. In this pipeline, the prediction of the ith body part
configuration z_i is represented by a score map p(z_i|h) = { ẑ_i^{(1,1)}, ẑ_i^{(1,2)}, ... }, where ẑ_i^{(x,y)} ∈ [0, 1]
denotes the predicted confidence of the existence of the ith body joint at location (x, y). Similarly,
the group of features ĥ^i used for estimating p(z|h) is represented by ĥ^i = { ĥ_i^{(1,1)}, ĥ_i^{(1,2)}, ... },
Figure 3: CNN implementation of our model. (1) The fc6 layer of VGG is used to obtain features f from an image. (2) The features f are then used for passing messages among latent variables h. (3) The estimated latent variables ĥ are used for predicting the body part score maps ẑ. Only the message passing process between two joints is shown, to be concise. Best viewed in color.
for i = 1, ..., N, where ĥ_i^{(x,y)} is a length-L vector. Therefore, the feature group ĥ^i is represented by a feature
map of L channels, where ĥ_i^{(x,y)} contains the L channels of features at location (x, y).
1) It comprises a fully convolutional network stage, which takes an image as input and outputs
features f . We use the fully convolutional implementation of VGG and the output of fc6 in VGG is
used as the feature map f .
2) Messages are passed among features h with Algorithm 1. Initially, the data term U_k for the kth feature
group is obtained from the feature map f by convolution, which is our implementation of the term φ_h(h_k, I)
in (13) and corresponds to line 2 of Algorithm 1. Then the CNN is used for passing messages among h
using lines 3-19 of Algorithm 1, which implements the term φ_h(h^i, Q(h^j|I, θ)) in (13) by convolution.
After message passing, ĥ^i for i = 1, ..., N is obtained and treated as feature maps to be used.
3) Then the feature maps ĥ^i for i = 1, ..., N are used to obtain the score maps for inferring
p(z|h, I, θ) with (10). As a simple example for illustration, we can use
ẑ_i^{(x,y)} = p( z_i^{(x,y)} = 1 | ĥ_i^{(x,y)}, I ) = sigm( w_i^T ĥ_i^{(x,y)} ) to obtain the predicted score ẑ_i^{(x,y)} for the ith part at
location (x, y). In this case, ĥ_i^{(x,y)} is the feature vector with L channels at location (x, y), and w_i can be
treated as the classifier. Our implementation uses the approach in [2] to infer p(z|h, I, θ), which also
models the spatial relationship among z_i.
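A sketch of this per-location classifier of step 3, which is equivalent to a 1×1 convolution followed by a sigmoid (shapes and random weights below are placeholders of our own):

```python
import numpy as np

def score_maps(h_hat, W):
    """Per-location joint scores: z_hat[i, y, x] = sigmoid(w_i^T h_hat[:, y, x]).

    h_hat: feature maps of shape (L, H, W_img); W: classifier weights of
    shape (N_joints, L). This is exactly a 1x1 convolution + sigmoid.
    """
    logits = np.tensordot(W, h_hat, axes=([1], [0]))   # (N_joints, H, W_img)
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
h_hat = rng.normal(size=(32, 46, 46))
W = rng.normal(scale=0.1, size=(14, 32))               # 14 LSP joints
print(score_maps(h_hat, W).shape)                      # (14, 46, 46)
```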
During training, a whole image (or many of them) can be used as the mini-batch and the error at each
output location of the network can be computed using an appropriate loss function with respect to the
ground truth of the body joints. We use the softmax loss with respect to the estimated part configuration
ẑ as the approximate loss function. Since we use a single CNN from the input to the features f, ĥ^i and ẑ,
one CNN is used for obtaining the score maps of body joints from the image. End-to-end learning
with softmax loss and standard BP is used.
4  Experiment
We conduct experiments on two benchmark datasets: the LSP dataset [12] and the FLIC dataset [18].
LSP contains 2, 000 images. 1, 000 images for training and 1, 000 for test. Each person is annotated
with 14 joints. FLIC contains 3, 987 training images and 1, 016 testing images from Hollywood
movies with upper body annotated. On both datasets, we use observer centric annotation for training
and evaluation. We also use negative samples, i.e. images not containing any person, from the INRIA
dataset [5]. In summary, we are consistent with Chen et al. [2] in training data preparation.
4.1  Results on the LSP dataset
The experimental results for our and previous approaches on LSP are shown in Table 1. For evaluation
metric, we choose the prevailing evaluation method: strict Percentage of Correct Parts (PCP). Under
this metric, a limb is considered to be detected only if both of its endpoints lie within 50% of the limb
length from their ground-truth locations. For pose estimation, it is well known that the accuracy of CNN
features is higher than handcrafted features. Therefore, we only compare with methods that use CNN
features to be concise. Pishchulin et al. [17] use extra training data, so we do not compare with
it. Yang et al. [27] learned features and structured body part configurations simultaneously. Our
performance is better than them because we model structure among features. Chu et al. [3] learned
structured features and heuristically defined a message passing scheme. Using only the LSP training
data, these two approaches have the highest PCP (Observer-Centric) reported in [1]. The model in
[3] has no probabilistic interpretation and cannot be modeled as CRF. Most vertices in their CNN
can only receive information from half of the vertices, while in our message passing scheme each
node could receive information from all vertices, since it is developed from CRF and the sum-product
algorithm. The approaches in [27, 3] are all based on the VGG structure as ours. By using a more
effective message passing scheme, our method reduces the mean error rate by 10%.
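For reference, strict PCP can be computed as follows (a sketch with our own function names and a toy example; in practice the limb list would follow the LSP skeleton):

```python
import numpy as np

def strict_pcp(pred, gt, limbs, thresh=0.5):
    """Strict PCP: a limb counts as correct only if *both* endpoints are
    within `thresh` * limb length of their ground-truth locations.

    pred, gt: (N_joints, 2) arrays of (x, y); limbs: list of joint-index pairs.
    """
    correct = []
    for a, b in limbs:
        limb_len = np.linalg.norm(gt[a] - gt[b])
        ok = (np.linalg.norm(pred[a] - gt[a]) <= thresh * limb_len and
              np.linalg.norm(pred[b] - gt[b]) <= thresh * limb_len)
        correct.append(ok)
    return float(np.mean(correct))

gt = np.array([[0, 0], [0, 10], [0, 20]], dtype=float)
pred = gt + np.array([[1, 0], [0, 4], [0, 6]], dtype=float)
print(strict_pcp(pred, gt, limbs=[(0, 1), (1, 2)]))    # 0.5: second limb fails
```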
Table 1: Quantitative results on the LSP dataset (PCP)

Experiment           Torso  Head  U.arms  L.arms  U.legs  L.legs  Mean
Chen & Yuille [2]     92.7  87.8    69.2    55.4    82.9    77.0  75.0
Yang et al. [27]      96.5  83.1    78.8    66.7    88.7    81.7  81.1
Chu et al. [3]        95.4  89.6    76.9    65.2    87.6    83.2  81.1
Ours                  96.0  91.3    80.0    67.1    89.5    85.0  83.1

4.2  Results on the FLIC dataset
We use Percentage of Correct Keypoints (PCK) as the evaluation metric. Because it is widely adopted
by previous works on FLIC, it provides convenience for comparison. These published works only
reported results on elbow and wrist and we follow the same practice. PCK reports the percentage of
predictions that lie within a normalized distance of the annotation. Toshev et al. [23], Chen and Yuille [2]
and Tompson et al. [21] also used CNN features. When compared with previous state of the art, our
method improves the performance of elbow and wrist by 2.7% and 1.7% respectively.
Table 2: Quantitative results on the FLIC dataset ([email protected])

Experiment            Elbow  Wrist
Toshev et al. [23]     92.3   82.0
Tompson et al. [21]    93.1   89.0
Chen and Yuille [2]    95.3   92.4
Ours                   98.0   94.1

4.3  Diagnostic Experiments
In this subsection, we conduct experiments to compare different message passing schemes, structures,
and nonlinear functions. The experimental results in Table 3 use the same VGG for feature extraction.
Flooding is a message passing schedule in which all vertices pass messages to their neighboring
vertices simultaneously and locally as follows:

Q^{t+1}(h^i) = σ( φ(h^i) + Σ_{i′∈V_N(i)\i} Q^t(h^{i′}) ∗ w_{i′→i} ),    (15)

where V_N(i) denotes the neighboring vertices of the ith vertex in the graphical model. We adopt the
iterative updating scheme of Zheng et al. [29].
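A minimal sketch of one flooding iteration of (15) on a toy 3-node chain, with convolution between feature maps and kernels shared across iterations (a plain sigmoid stands in for the normalisation σ; all names, sizes, and weights are our own):

```python
import numpy as np
from scipy.ndimage import convolve

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def flooding_step(Q, unary, neighbors, kernels):
    """One flooding iteration of (15): every vertex aggregates its
    neighbours' current beliefs through convolution, adds its unary
    term, and squashes the result (all vertices update in parallel)."""
    return {i: sigm(unary[i] + sum(convolve(Q[j], kernels[(j, i)])
                                   for j in neighbors[i]))
            for i in Q}

rng = np.random.default_rng(0)
H = W_img = 16
neighbors = {0: [1], 1: [0, 2], 2: [1]}           # a 3-node chain
unary = {i: rng.normal(size=(H, W_img)) for i in neighbors}
kernels = {(j, i): rng.normal(scale=0.1, size=(5, 5))
           for i in neighbors for j in neighbors[i]}
Q = {i: sigm(unary[i]) for i in neighbors}
for _ in range(2):                                 # two iterations, shared kernels
    Q = flooding_step(Q, unary, neighbors, kernels)
print(Q[1].shape)
```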
In Table 3, Flooding-1iter-tree denotes the result of using flooding to perform message passing once
using a CNN as in [29]. The tree structure in Fig. 2(b) is adopted. Flooding-2iter-tree indicates
Table 3: Diagnostic experiments (PCP)

Experiment             Torso  Head  U.arms  L.arms  U.legs  L.legs  Mean
Flooding-1iter-tree     93.0  87.5    73.0    58.9    84.3    76.4  76.6
Flooding-2iter-tree     93.5  86.7    73.0    59.8    83.7    79.0  77.1
Flooding-2iter-loopy    94.0  88.2    74.4    62.1    84.3    80.0  78.4
Serial-tree(ReLU)       95.5  88.9    75.9    63.8    87.1    81.4  80.1
Serial-tree(Softmax)    96.0  91.3    80.0    67.1    89.5    85.0  83.1
the result of using flooding to pass messages twice. The weights across the two message passing
iterations are shared. Experimental results show a slight improvement from passing messages twice
rather than once.
The result for the loopy structured graph in Fig. 2(e) is denoted by Flooding-2iter-loopy. The
connection of a pair of joints is decided by the following protocol: if the distance between the two
joints is within 48 pixels, which is the receptive field size of our filters, for 90% of the training samples,
then we connect them. An improvement of 1.3% is introduced by these extra connections.
These approaches share the same drawback: a lack of information for making predictions. With one
iteration of message passing, each body part can only receive information from neighboring parts,
while with two iterations a part can only receive information from parts within depth 2. However, the
largest depth in our graph is 10, so flooding is inefficient for a node to receive the messages from all
the other nodes. This problem is solved with the serial scheme.
The serial scheme passes messages in a predefined order and updates information sequentially.
For a tree-structured graph with N vertices, each vertex can be marginalized by passing messages
within 2N operations using the efficient sum-product algorithm [13]. The result of using serial
message passing is denoted by Serial-tree(Softmax) in Table 3. It can be seen that the serial scheme
performs better than the flooding scheme.
It is well known that the softmax leads to vanishing gradients, which makes network training
inefficient. In our experiments, we replace $\frac{1}{Z}e^{x}$ with $\frac{\beta}{Z}e^{\alpha x}$ to accelerate the training process. We
set $\alpha = 0.5$ and $\beta = N_c$, where $N_c$ is the number of feature channels. With this slight change, the
network converges much faster than with the softmax without $\alpha$ and $\beta$. The performance of using
this softmax, which is derived from our CRF in (13), is 3% higher than Serial-tree(ReLU), which
uses ReLU as the non-linear function for passing messages among features, a scheme used in [3].
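A minimal sketch of this scaled softmax is shown below; treating beta as a plain multiplicative factor outside the normalization is our reading of the original text and should be taken as an assumption:

    import numpy as np

    def scaled_softmax(x, alpha=0.5, beta=None, axis=-1):
        # (beta / Z) * exp(alpha * x); beta defaults to the number of
        # feature channels N_c along the chosen axis
        if beta is None:
            beta = x.shape[axis]
        x = alpha * x
        x = x - x.max(axis=axis, keepdims=True)   # numerical stability
        e = np.exp(x)
        return beta * e / e.sum(axis=axis, keepdims=True)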
5 Conclusion
We propose to use a CRF for modeling structured features and structured human body part configurations. This CRF is implemented by an end-to-end learned CNN. The efficient sum-product algorithm
in the probabilistic model guides us in using an efficient message passing approach, so that each
vertex receives messages from the other nodes in a more efficient way. The use of the CRF also helps
us to choose non-linear functions and makes explicit the assumptions and approximations made
in implementing such a CRF with a CNN. The gain in performance on two benchmark human
pose estimation datasets proves the effectiveness of this attempt, which points to a new direction for the
structure design of deep neural networks.
Acknowledgment: This work is supported by SenseTime Group Limited, Research Grants Council of
Hong Kong (Project Number CUHK14206114, CUHK14205615, CUHK14207814, CUHK14203015,
and CUHK417011) and National Natural Science Foundation of China (Number 61371192 and
61301269). W. Ouyang and X. Wang are the corresponding authors.
References
[1] MPII human pose dataset. http://human-pose.mpi-inf.mpg.de/#related_benchmarks. Accessed: 2016-05-20.
[2] X. Chen and A. L. Yuille. Articulated pose estimation by a graphical model with image dependent pairwise
relations. In NIPS, 2014.
[3] X. Chu, W. Ouyang, H. Li, and X. Wang. Structured feature learning for pose estimation. In CVPR, 2016.
[4] X. Chu, W. Ouyang, W. Yang, and X. Wang. Multi-task recurrent neural network for immediacy prediction.
In ICCV, 2015.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam. Large-scale
object classification using label relation graphs. In ECCV. 2014.
[7] S. M. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast
scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.
[8] P. F. Felzenszwalb and D. P. Huttenlocher. Pictorial structures for object recognition. IJCV, 61(1):55-79, 2005.
[9] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep
learning. arXiv preprint arXiv:1506.02142, 2015.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on
imagenet classification. arXiv preprint arXiv:1502.01852, 2015.
[11] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.
[12] S. Johnson and M. Everingham. Clustered pose and nonlinear appearance models for human pose
estimation. In BMVC, 2010.
[13] F. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[14] W. Li, R. Zhao, T. Xiao, and X. Wang. DeepReID: Deep filter pairing neural network for person re-identification. In CVPR, 2014.
[15] G. Lin, C. Shen, I. Reid, and A. van den Hengel. Deeply learning the messages in message passing
inference. In NIPS, 2015.
[16] W. Ouyang, X. Chu, and X. Wang. Multi-source deep learning for human pose estimation. In CVPR, 2014.
[17] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, and B. Schiele. DeepCut: Joint subset partition and labeling for multi-person pose estimation. arXiv preprint, November 2015.
[18] B. Sapp and B. Taskar. Modec: Multimodal decomposable models for human pose estimation. In CVPR,
2013.
[19] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In CVPR, 2015.
[21] J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler. Efficient object localization using convolutional
networks. In CVPR, 2015.
[22] J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical
model for human pose estimation. In NIPS, 2014.
[23] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In CVPR, 2014.
[24] L. Wan, D. Eigen, and R. Fergus. End-to-end integration of a convolutional network, deformable parts
model and non-maximum suppression. arXiv preprint arXiv:1411.5309, 2014.
[25] Y. Wang, D. Tran, and Z. Liao. Learning hierarchical poselets for human parsing. In CVPR, 2011.
[26] T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature representations with domain guided dropout for person re-identification. In CVPR, 2016.
[27] W. Yang, W. Ouyang, H. Li, and X. Wang. End-to-end learning of deformable mixture of parts and deep
convolutional neural networks for human pose estimation. In CVPR, 2016.
[28] Y. Yang and D. Ramanan. Articulated human detection with flexible mixtures of parts. PAMI, 35(12):2878-2890, 2013.
[29] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional
random fields as recurrent neural networks. In ICCV, 2015.
Natural-Parameter Networks:
A Class of Probabilistic Neural Networks
Hao Wang, Xingjian Shi, Dit-Yan Yeung
Hong Kong University of Science and Technology
{hwangaz,xshiab,dyyeung}@cse.ust.hk
Abstract
Neural networks (NN) have achieved state-of-the-art performance in various applications. Unfortunately in applications where training data is insufficient, they are
often prone to overfitting. One effective way to alleviate this problem is to exploit
the Bayesian approach by using Bayesian neural networks (BNN). Another shortcoming of NN is the lack of flexibility to customize different distributions for the
weights and neurons according to the data, as is often done in probabilistic graphical models. To address these problems, we propose a class of probabilistic neural
networks, dubbed natural-parameter networks (NPN), as a novel and lightweight
Bayesian treatment of NN. NPN allows the usage of arbitrary exponential-family
distributions to model the weights and neurons. Different from traditional NN
and BNN, NPN takes distributions as input and goes through layers of transformation before producing distributions to match the target output distributions. As
a Bayesian treatment, efficient backpropagation (BP) is performed to learn the
natural parameters for the distributions over both the weights and neurons. The
output distributions of each layer, as byproducts, may be used as second-order
representations for the associated tasks such as link prediction. Experiments on
real-world datasets show that NPN can achieve state-of-the-art performance.
1 Introduction
Recently neural networks (NN) have achieved state-of-the-art performance in various applications
ranging from computer vision [12] to natural language processing [20]. However, NN trained by
stochastic gradient descent (SGD) or its variants is known to suffer from overfitting especially
when training data is insufficient. Besides overfitting, another problem of NN comes from the
underestimated uncertainty, which could lead to poor performance in applications like active learning.
Bayesian neural networks (BNN) offer the promise of tackling these problems in a principled way.
Early BNN works include methods based on Laplace approximation [16], variational inference (VI)
[11], and Monte Carlo sampling [18], but they have not been widely adopted due to their lack of
scalability. Some recent advances in this direction seem to shed light on the practical adoption of
BNN. [8] proposed a method based on VI in which a Monte Carlo estimate of a lower bound on the
marginal likelihood is used to infer the weights. Recently, [10] used an online version of expectation
propagation (EP), called "probabilistic back propagation" (PBP), for the Bayesian learning of NN,
and [4] proposed "Bayes by Backprop" (BBB), which can be viewed as an extension of [8] based on
the "reparameterization trick" [13]. More recently, an interesting Bayesian treatment called "Bayesian
dark knowledge" (BDK) was designed to approximate a teacher network with a simpler student
network based on stochastic gradient Langevin dynamics (SGLD) [1].
Although these recent methods are more practical than earlier ones, several outstanding problems
remain to be addressed: (1) most of these methods require sampling either at training time [8, 4, 1] or
at test time [4], incurring much higher cost than a "vanilla" NN; (2) as mentioned in [1], methods
based on online EP or VI do not involve sampling, but they need to compute the predictive density
by integrating out the parameters, which is computationally inefficient; (3) these methods assume
Gaussian distributions for the weights and neurons, allowing no flexibility to customize different
distributions according to the data as is done in probabilistic graphical models (PGM).
To address the problems, we propose natural-parameter networks (NPN) as a class of probabilistic
neural networks where the input, target output, weights, and neurons can all be modeled by arbitrary
exponential-family distributions (e.g., Poisson distributions for word counts) instead of being limited
to Gaussian distributions. Input distributions go through layers of linear and nonlinear transformation
deterministically before producing distributions to match the target output distributions (previous
work [21] shows that providing distributions as input by corrupting the data with noise plays the
role of regularization). As byproducts, output distributions of intermediate layers may be used as
second-order representations for the associated tasks. Thanks to the properties of the exponential
family [3, 19], distributions in NPN are defined by the corresponding natural parameters which can
be learned efficiently by backpropagation. Unlike [4, 1], NPN explicitly propagates the estimates of
uncertainty back and forth in deep networks. This way the uncertainty estimates for each layer of
neurons are readily available for the associated tasks. Our experiments show that such information is
helpful when neurons of intermediate layers are used as representations like in autoencoders (AE). In
summary, our main contributions are:
? We propose NPN as a class of probabilistic neural networks. Our model combines the merits
of NN and PGM in terms of computational efficiency and flexibility to customize the types
of distributions for different types of data.
? Leveraging the properties of the exponential family, some sampling-free backpropagationcompatible algorithms are designed to efficiently learn the distributions over weights by
learning the natural parameters.
? Unlike most probabilistic NN models, NPN obtains the uncertainty of intermediate-layer
neurons as byproducts, which provide valuable information to the learned representations.
Experiments on real-world datasets show that NPN can achieve state-of-the-art performance
on classification, regression, and unsupervised representation learning tasks.
2 Natural-Parameter Networks
The exponential family refers to an important class of distributions with useful algebraic properties.
Distributions in the exponential family have the form $p(x|\eta) = h(x)g(\eta)\exp\{\eta^T u(x)\}$, where $x$ is
the random variable, $\eta$ denotes the natural parameters, $u(x)$ is a vector of sufficient statistics, and
$g(\eta)$ is the normalizer. For a given type of distribution, different choices of $\eta$ lead to different shapes.
For example, a univariate Gaussian distribution with $\eta = (c, d)^T$ corresponds to $\mathcal{N}(-\frac{c}{2d}, -\frac{1}{2d})$.
Motivated by this observation, in NPN, only the natural parameters need to be learned to model the
distributions over the weights and neurons. Consider an NPN which takes a vector random distribution
(e.g., a multivariate Gaussian distribution) as input, multiplies it by a matrix random distribution,
goes through nonlinear transformation, and outputs another distribution. Since all three distributions
in the process can be specified by their natural parameters (given the types of distributions), learning
and prediction of the network can actually operate in the space of natural parameters. For example, if
we use element-wise (factorized) gamma distributions for both the weights and neurons, the NPN
counterpart of a vanilla network only needs twice the number of free parameters (weights) and
neurons since there are two natural parameters for each univariate gamma distribution.
2.1 Notation and Conventions
We use boldface uppercase letters like $\mathbf{W}$ to denote matrices and boldface lowercase letters like
$\mathbf{b}$ for vectors. Similarly, a boldface number (e.g., $\mathbf{1}$ or $\mathbf{0}$) represents a row vector or a matrix with
identical entries. In NPN, $o^{(l)}$ is used to denote the values of neurons in layer $l$ before nonlinear
transformation and $a^{(l)}$ is for the values after nonlinear transformation. As mentioned above, NPN
tries to learn distributions over variables rather than the variables themselves. Hence we use letters
without subscripts c, d, m, and s (e.g., $o^{(l)}$ and $a^{(l)}$) to denote "random variables" with corresponding
distributions. Subscripts c and d are used to denote natural parameter pairs, such as $W_c$ and $W_d$.
Similarly, subscripts m and s are for mean-variance pairs. Note that for clarity, many operations used
below are implicitly element-wise, for example, the square $z^2$, division $\frac{z}{b}$, partial derivative $\frac{\partial z}{\partial b}$, the
gamma function $\Gamma(z)$, logarithm $\log z$, factorial $z!$, $1 + z$, and $\frac{1}{z}$. For the data $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$,
we set $a_m^{(0)} = x_i$ and $a_s^{(0)} = 0$ (input distributions with $a_s^{(0)} \neq 0$ resemble AE's denoising effect) as the
input of the network, and $y_i$ denotes the output targets (e.g., labels and word counts). In the following
text we drop the subscript $i$ (and sometimes the superscript $(l)$) for clarity. The bracket $(\cdot, \cdot)$ denotes
concatenation or pairs of vectors.
2.2 Linear Transformation in NPN
Here we first introduce the linear form of a general NPN. For simplicity, we assume distributions
with two natural parameters (e.g., gamma distributions, beta distributions, and Gaussian distributions), $\eta = (c, d)^T$, in this section. Specifically, we have factorized distributions on the weight
matrices, $p(W^{(l)}|W_c^{(l)}, W_d^{(l)}) = \prod_{i,j} p(W_{ij}^{(l)}|W_{c,ij}^{(l)}, W_{d,ij}^{(l)})$, where the pair $(W_{c,ij}^{(l)}, W_{d,ij}^{(l)})$ is the
corresponding natural parameters. For $b^{(l)}$, $o^{(l)}$, and $a^{(l)}$ we assume similar factorized distributions.

In a traditional NN, the linear transformation follows $o^{(l)} = a^{(l-1)} W^{(l)} + b^{(l)}$, where $a^{(l-1)}$ is the
output from the previous layer. In NN, $a^{(l-1)}$, $W^{(l)}$, and $b^{(l)}$ are deterministic variables, while in
NPN they are exponential-family distributions, meaning that the result $o^{(l)}$ is also a distribution. For
convenience of subsequent computation it is desirable to approximate $o^{(l)}$ using another exponential-family distribution. We can do this by matching the mean and variance. Specifically, after computing
$(W_m^{(l)}, W_s^{(l)}) = f(W_c^{(l)}, W_d^{(l)})$ and $(b_m^{(l)}, b_s^{(l)}) = f(b_c^{(l)}, b_d^{(l)})$, we can get $o_c^{(l)}$ and $o_d^{(l)}$ through
the mean $o_m^{(l)}$ and variance $o_s^{(l)}$ of $o^{(l)}$ as follows:

$$(a_m^{(l-1)}, a_s^{(l-1)}) = f(a_c^{(l-1)}, a_d^{(l-1)}), \quad o_m^{(l)} = a_m^{(l-1)} W_m^{(l)} + b_m^{(l)}, \qquad (1)$$

$$o_s^{(l)} = a_s^{(l-1)} W_s^{(l)} + a_s^{(l-1)} (W_m^{(l)} \circ W_m^{(l)}) + (a_m^{(l-1)} \circ a_m^{(l-1)}) W_s^{(l)} + b_s^{(l)}, \qquad (2)$$

$$(o_c^{(l)}, o_d^{(l)}) = f^{-1}(o_m^{(l)}, o_s^{(l)}), \qquad (3)$$

where $\circ$ denotes the element-wise product and the bijective function $f(\cdot, \cdot)$ maps the natural parameters of a distribution into its mean and variance (e.g., $f(c, d) = (\frac{c+1}{-d}, \frac{c+1}{d^2})$ for gamma distributions).
Similarly we use $f^{-1}(\cdot, \cdot)$ to denote the inverse transformation. $W_m^{(l)}$, $W_s^{(l)}$, $b_m^{(l)}$, and $b_s^{(l)}$ are the
mean and variance of $W^{(l)}$ and $b^{(l)}$ obtained from the natural parameters. The computed $o_m^{(l)}$ and
$o_s^{(l)}$ can then be used to recover $o_c^{(l)}$ and $o_d^{(l)}$, which will subsequently facilitate the feedforward
computation of the nonlinear transformation described in Section 2.3.
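A minimal NumPy sketch of this moment-matched linear layer (Equations (1)-(3)), operating directly in mean/variance space, is given below; it is an illustration, not the authors' implementation:

    import numpy as np

    def npn_linear(a_m, a_s, W_m, W_s, b_m, b_s):
        # a_m, a_s: mean and variance of the input activations
        # W_m, W_s, b_m, b_s: mean and variance of weights and biases
        o_m = a_m @ W_m + b_m                    # Eq. (1): output mean
        o_s = (a_s @ W_s                         # Eq. (2): output variance
               + a_s @ (W_m * W_m)
               + (a_m * a_m) @ W_s
               + b_s)
        return o_m, o_s   # f^{-1} then maps these back to natural parameters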
2.3 Nonlinear Transformation in NPN
After we obtain the linearly transformed distribution over $o^{(l)}$ defined by natural parameters $o_c^{(l)}$ and
$o_d^{(l)}$, an element-wise nonlinear transformation $v(\cdot)$ (with a well-defined inverse function $v^{-1}(\cdot)$) is
imposed. The resulting activation distribution is $p_a(a^{(l)}) = p_o(v^{-1}(a^{(l)}))\,|{v^{-1}}'(a^{(l)})|$, where $p_o$
is the factorized distribution over $o^{(l)}$ defined by $(o_c^{(l)}, o_d^{(l)})$.

Though $p_a(a^{(l)})$ may not be an exponential-family distribution, we can approximate it with one,
$p(a^{(l)}|a_c^{(l)}, a_d^{(l)})$, by matching the first two moments. Once the mean $a_m$ and variance $a_s$ of $p_a(a^{(l)})$
are obtained, we can compute the corresponding natural parameters with $f^{-1}(\cdot, \cdot)$ (the approximation
accuracy is sufficient according to preliminary experiments). The feedforward computation is:

$$a_m = \int p_o(o|o_c, o_d)\, v(o)\, do, \quad a_s = \int p_o(o|o_c, o_d)\, v(o)^2\, do - a_m^2, \quad (a_c, a_d) = f^{-1}(a_m, a_s). \qquad (4)$$
Here the key computational challenge is computing the integrals in Equation (4). Closed-form
solutions are needed for their efficient computation. If $p_o(o|o_c, o_d)$ is a Gaussian distribution, closed-form
solutions exist for common activation functions like $\tanh(x)$ and $\max(0, x)$ (details are in
Section 3.2). Unfortunately this is not the case for other distributions. Leveraging the convenient
form of the exponential family, we find that it is possible to design activation functions so that the
integrals for non-Gaussian distributions can also be expressed in closed form.
Theorem 1. Assume an exponential-family distribution $p_o(x|\eta) = h(x)g(\eta)\exp\{\eta^T u(x)\}$, where
the vector $u(x) = (u_1(x), u_2(x), \ldots, u_M(x))^T$ ($M$ is the number of natural parameters). If the
activation function $v(x) = r - q\exp(-\tau u_i(x))$ is used, the first two moments of $v(x)$, $\int p_o(x|\eta)v(x)dx$
and $\int p_o(x|\eta)v(x)^2 dx$, can be expressed in closed form. Here $i \in \{1, 2, \ldots, M\}$ (a different $u_i(x)$
corresponds to a different set of activation functions) and $r$, $q$, and $\tau$ are constants.

Table 1: Activation Functions for Exponential-Family Distributions

Distribution           Probability Density Function                                                   Activation Function           Support
Beta Distribution      $p(x) = \frac{\Gamma(c+d)}{\Gamma(c)\Gamma(d)} x^{c-1}(1-x)^{d-1}$              $qx^\tau$, $\tau \in (0, 1)$  $[0, 1]$
Rayleigh Distribution  $p(x) = \frac{x}{\sigma^2} \exp\{-\frac{x^2}{2\sigma^2}\}$                      $r - q\exp\{-\tau x^2\}$      $(0, +\infty)$
Gamma Distribution     $p(x) = \frac{d^c}{\Gamma(c)} x^{c-1} \exp\{-dx\}$                              $r - q\exp\{-\tau x\}$        $(0, +\infty)$
Poisson Distribution   $p(x) = \frac{c^x \exp\{-c\}}{x!}$                                              $r - q\exp\{-\tau x\}$        Nonnegative integers
Gaussian Distribution  $p(x) = (2\pi\sigma^2)^{-\frac{1}{2}} \exp\{-\frac{1}{2\sigma^2}(x - \mu)^2\}$  ReLU, tanh, and sigmoid       $(-\infty, +\infty)$

Proof. We first let $\eta = (\eta_1, \eta_2, \ldots, \eta_M)$, $\tilde{\eta} = (\eta_1, \eta_2, \ldots, \eta_i - \tau, \ldots, \eta_M)$, and $\hat{\eta} = (\eta_1, \eta_2, \ldots, \eta_i - 2\tau, \ldots, \eta_M)$. The first moment of $v(x)$ is

$$E(v(x)) = r - q\int h(x)g(\eta)\exp\{\eta^T u(x) - \tau u_i(x)\}\, dx = r - q\,\frac{g(\eta)}{g(\tilde{\eta})}\int h(x)g(\tilde{\eta})\exp\{\tilde{\eta}^T u(x)\}\, dx = r - q\,\frac{g(\eta)}{g(\tilde{\eta})}.$$

Similarly the second moment can be computed as $E(v(x)^2) = r^2 + q^2\frac{g(\eta)}{g(\hat{\eta})} - 2rq\,\frac{g(\eta)}{g(\tilde{\eta})}$.
A more detailed proof is provided in the supplementary material. With Theorem 1, what remains is to
find the constants that make $v(x)$ strictly increasing and bounded (Table 1 shows some exponential-family
distributions and their possible activation functions). For example in Equation (4), if
$v(x) = r - q\exp(-\tau x)$, then $a_m = r - q\,(\frac{o_d}{o_d + \tau})^{o_c}$ for the gamma distribution.
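As a sanity check of this closed form, the following sketch (ours, not the paper's) compares it against a Monte Carlo estimate for a gamma distribution with shape $o_c$ and rate $o_d$:

    import numpy as np

    def gamma_act_mean(o_c, o_d, r=1.0, q=1.0, tau=1.0):
        # closed-form mean of v(x) = r - q*exp(-tau*x), x ~ Gamma(o_c, o_d)
        return r - q * (o_d / (o_d + tau)) ** o_c

    rng = np.random.default_rng(0)
    o_c, o_d = 3.0, 2.0
    x = rng.gamma(shape=o_c, scale=1.0 / o_d, size=1_000_000)
    mc = np.mean(1.0 - np.exp(-x))              # Monte Carlo estimate
    print(gamma_act_mean(o_c, o_d), mc)         # the two values agree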
In the backpropagation, for distributions with two natural parameters the gradient consists of two
terms. For example, $\frac{\partial E}{\partial o_c} = \frac{\partial E}{\partial a_m} \circ \frac{\partial a_m}{\partial o_c} + \frac{\partial E}{\partial a_s} \circ \frac{\partial a_s}{\partial o_c}$, where $E$ is the error term of the network.
Algorithm 1 Deep Nonlinear NPN
1: Input: Data $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, number of iterations $T$, learning rate $\rho_t$, number of layers $L$.
2: for $t = 1 : T$ do
3:   for $l = 1 : L$ do
4:     Apply Equations (1)-(4) to compute the linear and nonlinear transformation in layer $l$.
5:   end for
6:   Compute the error $E$ from $(o_c^{(L)}, o_d^{(L)})$ or $(a_c^{(L)}, a_d^{(L)})$.
7:   for $l = L : 1$ do
8:     Compute $\frac{\partial E}{\partial W_m^{(l)}}$, $\frac{\partial E}{\partial W_s^{(l)}}$, $\frac{\partial E}{\partial b_m^{(l)}}$, and $\frac{\partial E}{\partial b_s^{(l)}}$. Compute $\frac{\partial E}{\partial W_c^{(l)}}$, $\frac{\partial E}{\partial W_d^{(l)}}$, $\frac{\partial E}{\partial b_c^{(l)}}$, and $\frac{\partial E}{\partial b_d^{(l)}}$.
9:   end for
10:  Update $W_c^{(l)}$, $W_d^{(l)}$, $b_c^{(l)}$, and $b_d^{(l)}$ in all layers.
11: end for
2.4 Deep Nonlinear NPN
Naturally, layers of nonlinear NPN can be stacked to form a deep NPN, as shown in Algorithm 1. (Although the approximation accuracy may decrease as NPN gets deeper during feedforward computation, it can be automatically adjusted according to the data during backpropagation. Note also that since the first part of Equation (1) and the last part of Equation (4) cancel out, we can directly use $(a_m^{(l)}, a_s^{(l)})$ without computing $(a_c^{(l)}, a_d^{(l)})$ here.) A
deep NPN is in some sense similar to a PGM with a chain structure. Unlike PGM in general, however,
NPN does not need costly inference algorithms like variational inference or Markov chain Monte
Carlo. For some chain-structured PGM (e.g., hidden Markov models), efficient inference algorithms
also exist due to their special structure. Similarly, the Markov property enables NPN to be efficiently
trained in an end-to-end backpropagation learning fashion in the space of natural parameters.

PGM is known to be more flexible than NN in the sense that it can choose different distributions to
depict different relationships among variables. A major drawback of PGM is its scalability, especially
when the PGM is deep. Different from PGM, NN stacks relatively simple computational layers and
learns the parameters using backpropagation, which is computationally more efficient than most
algorithms for PGM. NPN has the potential to get the best of both worlds. In terms of flexibility,
different types of exponential-family distributions can be chosen for the weights and neurons. Using
gamma distributions for both the weights and neurons in NPN leads to a deep and nonlinear version
of nonnegative matrix factorization [14] while an NPN with the Bernoulli distribution and sigmoid
activation resembles a Bayesian treatment of sigmoid belief networks [17]. If Poisson distributions
are chosen for the neurons, NPN becomes a neural analogue of deep Poisson factor analysis [26, 9].
Note that similar to the weight decay in NN, we may add the KL divergence between the prior
distributions and the learned distributions on the weights to the error E for regularization (we use
isotropic Gaussian priors in the experiments). In NPN, the chosen prior distributions correspond to
priors in Bayesian models and the learned distributions correspond to the approximation of posterior
distributions on weights. Note that the generative story assumed here is that weights are sampled
from the prior, and then output is generated (given all data) from these weights.
3 Variants of NPN
In this section, we introduce three NPN variants with different properties to demonstrate the flexibility
and effectiveness of NPN. Note that in practice we use a transformed version of the natural parameters,
referred to as proxy natural parameters here, instead of the original ones for computational efficiency.
For example, in gamma distributions $p(x|c, d) = \Gamma(c)^{-1} d^c x^{c-1} \exp(-dx)$, we use the proxy natural
parameters $(c, d)$ during computation rather than the natural parameters $(c - 1, -d)$.
3.1 Gamma NPN
The gamma distribution with support over positive values is an important member of the exponential
family. The corresponding probability density function is $p(x|c, d) = \Gamma(c)^{-1} d^c x^{c-1} \exp(-dx)$ with
$(c - 1, -d)$ as its natural parameters (we use $(c, d)$ as proxy natural parameters). If we assume
gamma distributions for $W^{(l)}$, $b^{(l)}$, $o^{(l)}$, and $a^{(l)}$, an AE formed by NPN becomes a deep and
nonlinear version of nonnegative matrix factorization [14]. To see this, note that this AE with
activation $v(x) = x$ and zero biases $b^{(l)}$ is equivalent to finding a factorization of a matrix $X$ such that
$X = H \prod_{l=\frac{L}{2}}^{L} W^{(l)}$, where $H$ denotes the middle-layer neurons and $W^{(l)}$ has nonnegative entries
from gamma distributions. In this gamma NPN, the parameters $W_c^{(l)}$, $W_d^{(l)}$, $b_c^{(l)}$, and $b_d^{(l)}$ can be learned
following Algorithm 1. We detail the algorithm as follows:

Linear Transformation: Since gamma distributions are assumed here, we can use the function
$f(c, d) = (\frac{c}{d}, \frac{c}{d^2})$ to compute $(W_m^{(l)}, W_s^{(l)}) = f(W_c^{(l)}, W_d^{(l)})$, $(b_m^{(l)}, b_s^{(l)}) = f(b_c^{(l)}, b_d^{(l)})$, and
$(o_c^{(l)}, o_d^{(l)}) = f^{-1}(o_m^{(l)}, o_s^{(l)})$ during the probabilistic linear transformation in Equations (1)-(3).

Nonlinear Transformation: With the proxy natural parameters for the gamma distributions over
$o^{(l)}$, the mean $a_m^{(l)}$ and variance $a_s^{(l)}$ for the nonlinearly transformed distribution over $a^{(l)}$ would
be obtained with Equation (4). Following Theorem 1, closed-form solutions are possible with
$v(x) = r(1 - \exp(-\tau x))$ ($r = q$ and $u_i(x) = x$), where $r$ and $\tau$ are constants. Using this new
activation function, we have (see Sections 2.1 and 6.1 of the supplementary material for details on the
function and derivation):

$$a_m = \int p_o(o|o_c, o_d)\, v(o)\, do = r\Big(1 - \frac{o_d^{o_c}}{\Gamma(o_c)} \cdot \Gamma(o_c) \cdot (o_d + \tau)^{-o_c}\Big) = r\Big(1 - \big(\tfrac{o_d}{o_d + \tau}\big)^{o_c}\Big),$$

$$a_s = r^2\Big(\big(\tfrac{o_d}{o_d + 2\tau}\big)^{o_c} - \big(\tfrac{o_d}{o_d + \tau}\big)^{2o_c}\Big).$$

Error: With $o_c^{(L)}$ and $o_d^{(L)}$, we can compute the regression error $E$ as the negative log-likelihood:

$$E = \big(\log \Gamma(o_c^{(L)}) - o_c^{(L)} \log o_d^{(L)} - (o_c^{(L)} - 1) \circ \log y + o_d^{(L)} \circ y\big)\mathbf{1}^T,$$

where $y$ is the observed output corresponding to $x$. For classification, the cross-entropy loss can be used
as $E$. Following the computation flow above, BP can be used to learn $W_c^{(l)}$, $W_d^{(l)}$, $b_c^{(l)}$, and $b_d^{(l)}$.
Figure 1: Predictive distributions for PBP, BDK, dropout NN, and NPN. The shaded regions correspond to $\pm 3$ standard deviations. The black curve is the data-generating function and blue curves show the mean of the predictive distributions. Red stars are the training data.
3.2 Gaussian NPN
Different from the gamma distribution, which has support over positive values only, the Gaussian
distribution, also an exponential-family distribution, can describe real-valued random variables. This
makes it a natural choice for NPN. We refer to this NPN variant with Gaussian distributions over both
the weights and neurons as Gaussian NPN. Details of Algorithm 1 for Gaussian NPN are as follows:

Linear Transformation: Besides support over real values, another property of Gaussian distributions
is that the mean and variance can be used as proxy natural parameters, leading to an identity mapping
function $f(c, d) = (c, d)$ which cuts the computation cost. We can use this function to compute
$(W_m^{(l)}, W_s^{(l)}) = f(W_c^{(l)}, W_d^{(l)})$, $(b_m^{(l)}, b_s^{(l)}) = f(b_c^{(l)}, b_d^{(l)})$, and $(o_c^{(l)}, o_d^{(l)}) = f^{-1}(o_m^{(l)}, o_s^{(l)})$
during the probabilistic linear transformation in Equations (1)-(3).

Nonlinear Transformation: If the sigmoid activation $v(x) = \sigma(x) = \frac{1}{1+\exp(-x)}$ is used, $a_m$ in
Equation (4) would be (the convolution of a Gaussian with a sigmoid is approximated by another sigmoid):

$$a_m = \int \mathcal{N}(o|o_c, \mathrm{diag}(o_d)) \circ \sigma(o)\, do \approx \sigma\Big(\frac{o_c}{(1 + \zeta^2 o_d)^{\frac{1}{2}}}\Big), \qquad (5)$$

$$a_s = \int \mathcal{N}(o|o_c, \mathrm{diag}(o_d)) \circ \sigma(o)^2\, do - a_m^2 \approx \sigma\Big(\frac{\alpha(o_c + \beta)}{(1 + \zeta^2 \alpha^2 o_d)^{1/2}}\Big) - a_m^2, \qquad (6)$$

where $\alpha = 4 - 2\sqrt{2}$, $\beta = -\log(\sqrt{2} + 1)$, and $\zeta^2 = \pi/8$. A similar approximation can be applied for
the activation $v(x) = \tanh(x)$ since $\tanh(x) = 2\sigma(2x) - 1$.
If the ReLU activation $v(x) = \max(0, x)$ is used, we can use the techniques in [6] to obtain the first
two moments of $\max(z_1, z_2)$, where $z_1$ and $z_2$ are Gaussian random variables. The full derivation for
$v(x) = \sigma(x)$, $v(x) = \tanh(x)$, and $v(x) = \max(0, x)$ is left to the supplementary material.
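A minimal sketch of the sigmoid case (Equations (5)-(6)) is given below; it is an illustration using the constants stated above:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gaussian_sigmoid_moments(o_c, o_d):
        # approximate mean/variance of sigmoid(o), o ~ N(o_c, diag(o_d))
        zeta2 = np.pi / 8.0
        alpha = 4.0 - 2.0 * np.sqrt(2.0)
        beta = -np.log(np.sqrt(2.0) + 1.0)
        a_m = sigmoid(o_c / np.sqrt(1.0 + zeta2 * o_d))
        a_s = sigmoid(alpha * (o_c + beta)
                      / np.sqrt(1.0 + zeta2 * alpha ** 2 * o_d)) - a_m ** 2
        return a_m, a_s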
Error: With $o_c^{(L)}$ and $o_d^{(L)}$ in the last layer, we can then compute the error $E$ as the KL divergence
$\mathrm{KL}(\mathcal{N}(o_c^{(L)}, \mathrm{diag}(o_d^{(L)})) \,\|\, \mathcal{N}(y_m, \mathrm{diag}(\epsilon)))$, where $\epsilon$ is a vector with all entries equal to a small
value $\epsilon$. Hence the error $E = \frac{1}{2}\big(\frac{\epsilon}{o_d^{(L)}}\mathbf{1}^T + \frac{(o_c^{(L)} - y)^2}{o_d^{(L)}}\mathbf{1}^T - K + (\log o_d^{(L)})\mathbf{1}^T - K\log\epsilon\big)$. For
classification tasks, the cross-entropy loss is used. Following the computation flow above, BP can be
used to learn $W_c^{(l)}$, $W_d^{(l)}$, $b_c^{(l)}$, and $b_d^{(l)}$.
3.3 Poisson NPN
The Poisson distribution, as another member of the exponential family, is often used to model counts
(e.g., counts of words, topics, or super topics in documents). Hence for text modeling, it is natural to
assume Poisson distributions for the neurons in NPN. Interestingly, this design of Poisson NPN can be
seen as a neural analogue of some Poisson factor analysis models [26].

Besides closed-form nonlinear transformation, another challenge of Poisson NPN is to map the pair
$(o_m^{(l)}, o_s^{(l)})$ to the single parameter $o_c^{(l)}$ of Poisson distributions. According to the central limit theorem,
we have $o_c^{(l)} = \frac{1}{4}\big(2o_m^{(l)} - 1 + \sqrt{(2o_m^{(l)} - 1)^2 + 8o_s^{(l)}}\big)$ (see Sections 3 and 6.3 of the supplementary
material for proofs, justifications, and a detailed derivation of Poisson NPN).
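The mapping is a direct closed-form expression, as the following short sketch (ours, for illustration) shows:

    import numpy as np

    def poisson_param_from_moments(o_m, o_s):
        # map a (mean, variance) pair to the single Poisson parameter o_c
        t = 2.0 * o_m - 1.0
        return 0.25 * (t + np.sqrt(t * t + 8.0 * o_s))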
4 Experiments
In this section we evaluate variants of NPN and other state-of-the-art methods on four real-world
datasets. We use Matlab (with GPU) to implement NPN, the AE variants, and the "vanilla" NN trained
with dropout SGD (dropout NN). For the other baselines, we use the Theano library [2] and MXNet [5].
Table 2: Test Error Rates on MNIST

Method  BDK    BBB    Dropout1  Dropout2  gamma NPN  Gaussian NPN
Error   1.38%  1.34%  1.33%     1.40%     1.27%      1.25%
Table 3: Test Error Rates for Different Sizes of Training Data

Size     100     500     2,000  10,000
NPN      29.97%  13.79%  7.89%  3.28%
Dropout  32.58%  15.39%  8.78%  3.53%
BDK      30.08%  14.34%  8.31%  3.55%

4.1 Toy Regression Task
To gain some insight into NPN, we start with a toy 1-d regression task so that the predicted mean and
variance can be visualized. Following [1], we generate 20 points in one dimension from a uniform
distribution in the interval $[-4, 4]$. The target outputs are sampled from the function $y = x^3 + \epsilon_n$,
where $\epsilon_n \sim \mathcal{N}(0, 9)$. We fit the data with the Gaussian NPN, BDK, and PBP (see the supplementary
material for detailed hyperparameters). Figure 1 shows the predicted mean and variance of NPN,
BDK, and PBP, along with the mean provided by the dropout NN (for larger versions of the figures please
refer to the end of the supplementary material). As we can see, the variance of PBP, BDK, and NPN
diverges as x moves farther away from the training data. Both NPN's and BDK's predictive distributions
are accurate enough to keep most of the $y = x^3$ curve inside the shaded regions with relatively low
variance. An interesting observation is that the training data points become more scattered when
$x > 0$. Ideally, the variance should start diverging from $x = 0$, which is what happens in NPN.
However, PBP and BDK are not sensitive enough to capture this change in dispersion. On another dataset,
Boston Housing, the root mean square errors for PBP, BDK, and NPN are 3.01, 2.82, and 2.57.
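For reference, the toy data described above can be generated as follows (the random seed is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-4.0, 4.0, size=20)
    y = x ** 3 + rng.normal(0.0, 3.0, size=20)   # N(0, 9) has std. dev. 3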
4.2 MNIST Classification
The MNIST digit dataset consists of 60,000 training images and 10,000 test images. All images
are labeled as one of the 10 digits. We train the models with 50,000 images and use 10,000 images
for validation. Networks with a structure of 784-800-800-10 are used for all methods, since 800
works best for the dropout NN (denoted as Dropout1 in Table 2) and BDK (BDK with a structure of
784-400-400-10 achieves an error rate of 1.41%). We also try the dropout NN with twice the number
of hidden neurons (Dropout2 in Table 2) for fair comparison. For BBB, we directly quote their results
from [4]. We implement BDK and NPN using the same hyperparameters as in [1] whenever possible.
Gaussian priors are used for NPN (see the supplementary material for detailed hyperparameters).
As shown in Table 2, BDK and BBB achieve performance comparable
to the dropout NN (as in [1], PBP is not included in the comparison
since it supports regression only), and gamma NPN slightly outperforms
the dropout NN. Gaussian NPN is able to achieve a lower error rate of
1.25%. Note that BBB with Gaussian priors can only achieve an error
rate of 1.82%; 1.34% is the result of using Gaussian mixture priors. For
reference, the error rate for a dropout NN with 1600 neurons in each hidden
layer is 1.40%. The time cost per epoch is 18.3s, 16.2s, and 6.4s for NPN,
BDK, and NN, respectively. Note that BDK is implemented in C++ and NPN in Matlab.
Figure 2: Classification accuracy for different variance (uncertainty). Note that "1" on the x-axis means $a_s^{(L)}\mathbf{1}^T \in [0, 0.04)$, "2" means $a_s^{(L)}\mathbf{1}^T \in [0.04, 0.08)$, etc.
To evaluate NPN's ability as a Bayesian treatment to avoid overfitting,
we vary the size of the training set (from 100 to 10,000 data points) and compare the test error rates.
As shown in Table 3, the margin between the Gaussian NPN and the dropout NN increases as the training
set shrinks. Besides, to verify the effectiveness of the estimated uncertainty, we split the test set into
9 subsets according to NPN's estimated variance (uncertainty) $a_s^{(L)}\mathbf{1}^T$ for each sample and show the
accuracy for each subset in Figure 2. We find that the more uncertain NPN is, the lower the
accuracy, indicating that the estimated uncertainty is well calibrated.
4.3 Second-Order Representation Learning
Besides classification and regression, we also consider the problem of unsupervised representation
learning with a subsequent link prediction task. Three real-world datasets, Citeulike-a, Citeulike-t,
and arXiv, are used. The first two datasets are from [22, 23], collected separately from CiteULike in
different ways to mimic different real-world settings. The third one is from arXiv as one of the SNAP
datasets [15]. Citeulike-a consists of 16,980 documents, 8,000 terms, and 44,709 links (citations).
Table 4: Link Rank on Three Datasets

Method        Citeulike-a     Citeulike-t      arXiv
SAE           1104.7          2109.8           4232.7
SDAE          992.4           1356.8           2916.1
VAE           980.8           1599.6           3367.2
gamma NPN     851.7 (935.8)   1342.3 (1400.7)  2796.4 (3038.8)
Gaussian NPN  750.6 (823.9)   1280.4 (1330.7)  2687.9 (2923.8)
Poisson NPN   690.9 (5389.7)  1354.1 (9117.2)  2684.1 (10791.3)
Citeulike-t consists of 25,975 documents, 20,000 terms, and 32,565 links. The last dataset, arXiv,
consists of 27,770 documents, 8,000 terms, and 352,807 links.
The task is to perform unsupervised representation learning before feeding the extracted representations (middle-layer neurons) into a Bayesian LR algorithm [3]. We use the stacked autoencoder (SAE)
[7], stacked denoising autoencoder (SDAE) [21], variational autoencoder (VAE) [13] as baselines
(hyperparameters like weight decay and dropout rate are chosen by cross validation). As in SAE,
we use different variants of NPN to form autoencoders where both the input and output targets are
bag-of-words (BOW) vectors for the documents. The network structure for all models is B-100-50
(B is the number of terms). Please refer to the supplementary material for detailed hyperparameters.
One major advantage of NPN over SAE and SDAE is that the learned representations are distributions instead of point estimates. Since representations
from NPN contain both the mean and variance, we call them second-order representations. Note that although VAE also produces second-order
representations, the variance part is simply parameterized by multilayer
perceptrons, while NPN's variance is naturally computed through the propagation of distributions. These 50-dimensional representations with both mean
and variance are fed into a Bayesian LR algorithm for link prediction (for
the deterministic AEs the variance is set to 0).
Figure 3: Reconstruction error and estimated uncertainty for each data point in Citeulike-a.
We use links among 80% of the nodes (documents) to train the Bayesian LR and use the other links as
the test set. Link rank and AUC (area under the ROC curve) are used as evaluation metrics. The link
rank is the average rank of the observed links from test nodes to training nodes. We compute the
AUC for every test node and report the average values. By definition, a lower link rank and a higher
AUC indicate better predictive performance and imply more powerful representations.
Table 4 shows the link rank for different models. For fair comparison we also try all baselines with
double budget (a structure of B-200-50) and report whichever has higher accuracy. As we can see, by
treating representations as distributions rather than points in a vector space, NPN is able to achieve
much lower link rank than all baselines, including VAE with variance information. The numbers in
the brackets show the link rank of NPN if we discard the variance information. The performance
gain from variance information verifies the effectiveness of the variance (uncertainty) estimated by
NPN. Among different variants of NPN, the Gaussian NPN seems to perform better in datasets with
fewer words like Citeulike-t (only 18.8 words per document). The Poisson NPN, as a more natural
choice to model text, achieves the best performance in datasets with more words (Citeulike-a and
arXiv). The performance in AUC is consistent with that in terms of the link rank (see Section 4 of the
supplementary material). To further verify the effectiveness of the estimated uncertainty, we plot the
reconstruction error and the variance $o_s^{(L)}\mathbf{1}^T$ for each data point of Citeulike-a in Figure 3. As we
can see, higher uncertainty often indicates not only a higher reconstruction error $E$ but also a higher
variance in $E$.
5 Conclusion
We have introduced a family of models, called natural-parameter networks, as a novel class of probabilistic NN to combine the merits of NN and PGM. NPN regards the weights and neurons as arbitrary
exponential-family distributions rather than just point estimates or factorized Gaussian distributions.
Such flexibility enables richer descriptions of hierarchical relationships among latent variables and
adds another degree of freedom to customize NN for different types of data. Efficient sampling-free
backpropagation-compatible algorithms are designed for the learning of NPN. Experiments show that
NPN achieves state-of-the-art performance on classification, regression, and representation learning
tasks. As possible extensions of NPN, it would be interesting to connect NPN to arbitrary PGM to
form fully Bayesian deep learning models [24, 25], allowing even richer descriptions of relationships
among latent variables. It is also worth noting that NPN cannot be defined as generative models
and, unlike PGM, the same NPN model cannot be used to support multiple types of inference (with
different observed and hidden variables). We will try to address these limitations in our future work.
References
[1] A. K. Balan, V. Rathod, K. P. Murphy, and M. Welling. Bayesian dark knowledge. In NIPS, 2015.
[2] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio.
Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS
2012 Workshop, 2012.
[3] C. M. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., Secaucus, NJ,
USA, 2006.
[4] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural network. In
ICML, 2015.
[5] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. CoRR, abs/1512.01274,
2015.
[6] C. E. Clark. The greatest of a finite set of random variables. Operations Research, 9(2):145-162, 1961.
[7] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. Book in preparation for MIT Press, 2016.
[8] A. Graves. Practical variational inference for neural networks. In NIPS, 2011.
[9] R. Henao, Z. Gan, J. Lu, and L. Carin. Deep poisson factor modeling. In NIPS, 2015.
[10] J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015.
[11] G. E. Hinton and D. Van Camp. Keeping the neural networks simple by minimizing the description length
of the weights. In COLT, 1993.
[12] A. Karpathy and F. Li. Deep visual-semantic alignments for generating image descriptions. In CVPR,
2015.
[13] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. CoRR, abs/1312.6114, 2013.
[14] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In NIPS, 2001.
[15] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.
stanford.edu/data, June 2014.
[16] D. J. C. MacKay. A practical Bayesian framework for backprop networks. Neural Computation, 1992.
[17] R. M. Neal. Learning stochastic feedforward networks. Department of Computer Science, University of
Toronto, 1990.
[18] R. M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[19] R. Ranganath, L. Tang, L. Charlin, and D. M. Blei. Deep exponential families. In AISTATS, 2015.
[20] R. Salakhutdinov and G. E. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969-978, 2009.
[21] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371-3408, 2010.
[22] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles. In KDD,
2011.
[23] H. Wang, B. Chen, and W.-J. Li. Collaborative topic regression with social regularization for tag recommendation. In IJCAI, 2013.
[24] H. Wang, N. Wang, and D. Yeung. Collaborative deep learning for recommender systems. In KDD, 2015.
[25] H. Wang and D. Yeung. Towards Bayesian deep learning: A framework and some existing methods. TKDE,
2016, to appear.
[26] M. Zhou, L. Hannah, D. B. Dunson, and L. Carin. Beta-negative binomial process and poisson factor
analysis. In AISTATS, 2012.
Unsupervised Discrimination of Clustered Data
via Optimization of Binary Information Gain
Nicol N. Schraudolph
Computer Science & Engr. Dept.
University of California, San Diego
La Jolla, CA 92093-0114
Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
San Diego, CA 92186-5800
[email protected]
[email protected]
Abstract
We present the information-theoretic derivation of a learning algorithm
that clusters unlabelled data with linear discriminants. In contrast to
methods that try to preserve information about the input patterns, we
maximize the information gained from observing the output of robust
binary discriminators implemented with sigmoid nodes. We derive a local
weight adaptation rule via gradient ascent in this objective, demonstrate
its dynamics on some simple data sets, relate our approach to previous
work and suggest directions in which it may be extended.
1 INTRODUCTION
Unsupervised learning algorithms may perform useful preprocessing functions by preserving some aspects of their input while discarding others. This can be quantified as
maximization of the information the network's output carries about those aspects of the
input that are deemed important.
(Linsker, 1988) suggests maximal preservation of information about all aspects of the input.
This Infomax principle provides for optimal reconstruction of the input in the face of noise
and resource limitations. The I-max algorithm (Becker and Hinton, 1992), by contrast,
focusses on coherent aspects of the input, which are extracted by maximizing the mutual
information between networks looking at different patches of input.
Our work aims at recoding clustered data with adaptive discriminants that selectively
emphasize gaps between clusters while collapsing patterns within a cluster onto near-identical
output representations. We achieve this by maximizing information gain: the information gained
through observation of the network's outputs under a probabilistic interpretation.
2 STRATEGY
Consider a node that performs a weighted summation on its inputs i and squashes the
resulting net input y through a sigmoid function f :
z = f(y),   where   f(y) = 1 / (1 + e^(−y))   and   y = w · i.   (1)
Such a sigmoid node can be regarded as a "soft" discriminant: with a large enough weight
vector, the output will essentially be binary, but smaller weights allow for the expression of
varying degrees of confidence in the discrimination.
To make this notion more precise, consider y a random variable with bimodal distribution,
namely an even mixture of two Gaussian distributions. Then if their means equal ± half
their variance, z is the posterior probability for discriminating between the two source
distributions (Anderson, 1972).
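Anderson's result is easy to verify numerically. The following sketch (our illustration, not part of the paper; the variance σ² = 4 is an arbitrary choice) checks that for an even mixture of N(+σ²/2, σ²) and N(−σ²/2, σ²), the posterior probability of the positive component given y equals the logistic sigmoid of y:

import numpy as np

# Sketch: for an even mixture of N(+s2/2, s2) and N(-s2/2, s2),
# the posterior P(positive component | y) is the logistic sigmoid of y.
s2 = 4.0                          # arbitrary variance for the check
y = np.linspace(-6.0, 6.0, 25)

def gauss(y, mu, var):
    return np.exp(-(y - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

p_pos = gauss(y, +s2 / 2, s2)
p_neg = gauss(y, -s2 / 2, s2)
posterior = p_pos / (p_pos + p_neg)

assert np.allclose(posterior, 1.0 / (1.0 + np.exp(-y)))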
This probabilistic interpretation of z can be used to design a learning algorithm that seeks
such bimodal projections of the input data. In particular, we search for highly informative
discriminants by maximizing the information gained about the binary discrimination
through observation of z. This binary information gain is given by

ΔH(z) = H(ẑ) − H(z),   (2)

where H(z) is the entropy of z under the above interpretation, and ẑ is an estimate of z
based on prior knowledge.
3 RESULTS
3.1 THE ALGORITHM
In the Appendix, we present the derivation of a learning algorithm that maximizes binary
information gain by gradient ascent. The resulting weight update rule is
dw
0(
f'(y) i (y - fI),
(3)
where fI, the estimated net input, must meet certain conditions 1 (see Appendix). The weight
change dictated by (3) is thus proportional to the product of three factors:
? the derivative of the Sigmoid squashing function,
? the presynaptic input i, and
? the difference between actual and anticipated net input.
¹ In what follows, we have successfully used estimators that merely approximate these conditions.
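To make the rule concrete, here is a minimal NumPy sketch of one batch update under (3) for a single node, with ŷ taken to be the batch average ⟨y⟩ used in Section 3.2 below; the learning rate and the function name are our own choices:

import numpy as np

def bingo_update(w, X, lr=0.1):
    # One batch update of rule (3): delta_w ~ f'(y) * i * (y - y_hat),
    # with the single-node estimator y_hat = <y> (batch mean).
    # X holds one input pattern per row; w is the weight vector.
    y = X @ w
    z = 1.0 / (1.0 + np.exp(-y))          # sigmoid output, eq. (1)
    fprime = z * (1.0 - z)                # derivative of the logistic sigmoid
    y_hat = y.mean()                      # estimated net input
    grad = ((fprime * (y - y_hat))[:, None] * X).mean(axis=0)
    return w + lr * grad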
Figure 1: Phase plot of Δy against net input y for ŷ ∈ {−3, −2, ..., 3}. See text for details.
3.2 SINGLE NODE DYNAMICS
For a single, isolated node, we use ⟨y⟩, the average net input over a batch of input patterns,
as the estimator ŷ. The behavior of our algorithm in this setting is best understood from
a phase plot as shown in Figure 1, where the change in net input resulting from a weight
change according to (3) is graphed against the net input that causes this weight change.
Curves are plotted for seven different values of ŷ. The central curve (ŷ = 0) is identical
to that of the straightforward Hebb rule for sigmoid nodes: both positive and negative net
inputs are equally amplified until they reach saturation. For non-zero values of ŷ, however,
the curves become asymmetric: positive ŷ favor negative changes Δy and vice versa. For
ŷ = ⟨y⟩, it is easy to see that this will have the effect of centering net inputs around zero.
The node will therefore converge to a state where its output is one for half of the input
patterns, and zero for the other half. Note that this can be achieved by any sufficiently large
weight vector, regardless of its direction! However, since simple gradient ascent is both
greedy and local in weight space, starting it from small random initial weights is equivalent
to a bias towards discriminations that can be made confidently with smaller weight vectors.
To illustrate this effect, we have tested a single node running our algorithm on a set of
vowel formant frequency data due to (Peterson and Barney, 1952). The most prominent
feature of this data is a central gap that separates front from back vowels; however, this
feature is near-orthogonal to the principal component of the data and thus escapes detection
by standard Hebbian learning rules.
Figure 2 shows the initial, intermediate and final phase of this experiment, using a visualization technique suggested by (Munro, 1992). Each plot shows the pre-image of zero
net input superimposed on a scatter plot of the data set in input space. The two flanking
lines delineate the "active region" where the sigmoid is not saturated, and thus provide an
indication of weight vector size.
As demonstrated in this figure, our algorithm is capable of proceeding smoothly from a
small initial weight vector that responds in principal component direction to a solution
which uses a large weight vector in near-orthogonal direction to successfully discriminate
between the two data clusters.
Figure 2: Single node discovers distinction between front and back vowels in unlabelled data
set of 1514 multi-speaker vowel utterances (Peterson and Barney, 1952). Superimposed on
a scatter plot of the data are the pre-images of y = 0 (solid center line) and y = ±1.31696
(flanking lines) in input space. Discovered feature is far from principal component direction.
3.3 EXTENSION TO A LAYER OF NODES
A learning algorithm for a single sigmoid node has of course only limited utility. When
extending it to a layer of such nodes, some form of lateral interaction is needed to ensure that
each node makes a different binary discrimination. The common technique of introducing
lateral competition for activity or weight changes would achieve this only at the cost of
severely distorting the behavior of our algorithm.
Fortunately our framework is flexible enough to accommodate lateral differentiation in a
less intrusive manner: by picking an estimator that uses the activity of every other node in
the layer to make its prediction, we force each node to maximize its information gain with
respect to the entire layer. To demonstrate this technique we use the linear second-order
estimator
ŷ_i = ⟨y_i⟩ + Σ_{j≠i} (y_j − ⟨y_j⟩) ρ_ij   (4)

to predict the net input y_i of the ith node in the layer, where the ⟨·⟩ operator denotes
averaging over a batch of input patterns, and ρ_ij is the empirical correlation coefficient

ρ_ij = ⟨(y_i − ⟨y_i⟩)(y_j − ⟨y_j⟩)⟩ / √( ⟨(y_i − ⟨y_i⟩)²⟩ ⟨(y_j − ⟨y_j⟩)²⟩ ).   (5)
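A minimal NumPy sketch of this estimator, assuming Y holds one batch of net inputs with one pattern per row and nonzero variance in every node (shapes and names are our own conventions):

import numpy as np

def layer_estimates(Y):
    # Second-order estimates y_hat_i per equations (4)-(5).
    # Y: (patterns x nodes) matrix of net inputs over one batch.
    mean = Y.mean(axis=0)
    D = Y - mean                              # deviations from the batch mean
    cov = D.T @ D / len(Y)
    std = np.sqrt(np.diag(cov))
    rho = cov / np.outer(std, std)            # empirical correlations, eq. (5)
    np.fill_diagonal(rho, 0.0)                # exclude the j = i term
    return mean + D @ rho                     # eq. (4); rho is symmetric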
Figure 3 shows a layer of three such nodes adapting to a mixture of three Gaussian distributions, with each node initially picking a different Gaussian to separate from the other
two. After some time, all three discriminants rotate in concert so as to further maximize
information gain by splitting the input data evenly. Note that throughout this process, the
nodes always remain well-differentiated from each other.
For most initial conditions, however, the course of this experiment is that depicted in
Figure 4: two nodes discover a more efficient way to discriminate between the three input
clusters, to the detriment of the third. The latecomer repeatedly tries to settle into one of
the gaps in the data, but this would result in a high degree of predictability. Thus the node
with the shortest weight vector and hence most volatile discriminant is weakened further,
its weight vector all but eliminated in an effective demonstration of Occam's razor.
Figure 3: Layer of three nodes adapts to a mixture of three Gaussian distributions. In the
final state, each node splits the input data evenly.
Figure 4: Most initial conditions, however, lead to a minimal solution involving only two
nodes. The weakest node is "crowded out" by Occam's razor, its weight vector reduced to
near-zero length.
4 DISCUSSION
4.1 RELATED WORK
By maximizing the difference of actual from anticipated response, our algorithm makes
binary discriminations that are highly informative with respect to clusters in the input. The
weight change in proportion to a difference in activity is reminiscent of the covariance rule
(Sejnowski, 1977) but generalizes it in two important respects:
• it explicitly incorporates a sigmoid nonlinearity, and
• ŷ need not necessarily be the average net input.
Both of these are critical improvements: the first allows the node to respond only to inputs in
its non-saturated region, and hence to learn local features in projections other than along the
principal component direction. The second provides a convenient mechanism for extending
the algorithm by incorporating additional information in the estimator.
We share the goal of seeking highly informative, bimodal projections of the input with the
Bienenstock-Cooper-Munro (BCM) algorithm (Bienenstock et al., 1982; Intrator, 1992).
A critical difference, however, is that BCM uses a complex, asymmetric nonlinearity
that increases the selectivity of nodes and hence produces a localized, 1-of-n recoding
of the input, whereas our algorithm makes symmetric, robust and independent binary
discriminations.
503
504
Schraudolph and Sejnowski
4.2 FUTURE DIRECTIONS
Since the learning algorithm described here has demonstrated flexibility and efficiency in our
initial experiments, we plan to scale it up to address high-dimensional, real-world problems.
The algorithm itself is likely to be further extended and improved as its applications grow
more demanding.
For instance, although the size of the weight vector represents commitment to a discriminant
in our framework, it is not explicitly controlled. The dynamics of weight adaptation happen
to implement a reasonable bias in this case, but further refinements may be possible. Other
priors implicit in our approach, such as the preference for splitting the data evenly, could be similarly relaxed or modified.
Another attractive generalization of this learning rule would be to implement nonlinear
discriminants by backpropagating weight derivatives through hidden units. The dynamic
stability of our algorithm is a significant asset for its expansion into an efficient unsupervised
multi-layer network.
In such a network, linear estimators are no longer sufficient to fully remove redundancy between nodes. In his closely related predictability minimization architecture, (Schmidhuber,
1992) uses backpropagation networks as nonlinear estimators for this purpose with some
success.
Since the notion of estimator in our framework is completely general, it may combine
evidence from multiple, disparate sources. Thus a network running our algorithm can
be trained to complement a heterogeneous mix of pattern recognition methods by maximizing information gain relative to an estimator that utilizes all such available sources of
information. This flexibility should greatly aid the integration of binary information gain
optimization into existing techniques.
APPENDIX: MATHEMATICAL DERIVATION
We derive a straightforward batch learning algorithm that performs gradient ascent in the
binary information gain objective. On-line approximations may be obtained by using
exponential traces in place of the batch averages denoted by the ⟨·⟩ operator.
CONDITIONS ON THE ESTIMATOR
To eliminate the derivative term from (11d) below we require that the estimator ẑ be

• unbiased: ⟨ẑ⟩ = ⟨z⟩, and
• honest: ∂ẑ/∂z = ∂⟨ẑ⟩/∂z.

The honesty condition ensures that the estimator has access to the estimated variable only
on the slow timescale of batch averaging, thus eliminating trivial "solutions" such as ẑ = z.
For an unbiased and honest estimator,

∂ẑ/∂z = ∂⟨ẑ⟩/∂z = ∂⟨z⟩/∂z = ⟨∂z/∂z⟩ = 1.   (6)
BINARY ENTROPY AND ITS DERIVATIVE
The entropy of a binary random variable X as a function of z = Pr(X = 1) is given by

H(z) = −z log z − (1 − z) log(1 − z);   (7)

its derivative with respect to z is

∂H(z)/∂z = log(1 − z) − log z.   (8)

Since z in our case is produced by the sigmoid function f given in (1), this conveniently
simplifies to

∂H(z)/∂z = −y.   (9)
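The simplification from (8) to (9) is immediate, since log((1 − z)/z) = −y when z = f(y); a quick numerical check (our sketch, not part of the derivation):

import numpy as np

y = np.linspace(-4.0, 4.0, 17)
z = 1.0 / (1.0 + np.exp(-y))
assert np.allclose(np.log(1.0 - z) - np.log(z), -y)   # eq. (8) equals eq. (9)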
GRADIENT ASCENT IN INFORMATION GAIN
The information ΔH gained from observing the output z of the discriminator is

ΔH(z) = H(ẑ) − H(z),   (10)

where ẑ is an estimate of z based on prior knowledge. We maximize ΔH(z) by batched
gradient ascent in weight space:

Δw ∝ ⟨ ∂/∂w ΔH(z) ⟩                                          (11a)
    = ⟨ ∂z/∂w · ∂/∂z [H(ẑ) − H(z)] ⟩                          (11b)
    = ⟨ z(1 − z) i [ (∂ẑ/∂z) ∂H(ẑ)/∂ẑ − ∂H(z)/∂z ] ⟩          (11c)
    = ⟨ z(1 − z) i (y − (∂ẑ/∂z) ŷ) ⟩,                         (11d)

where estimation of the node's output z has been replaced by that of its net input y.
Substitution of (6) into (11d) yields the binary information gain optimization rule

Δw ∝ ⟨ z(1 − z) i (y − ŷ) ⟩.   (12)
Acknowledgements
We would like to thank Steve Nowlan, Peter Dayan and Rich Zemel for stimulating and
helpful discussions. This work was supported by the Office of Naval Research and the
McDonnell-Pew Center for Cognitive Neuroscience at San Diego.
References
Anderson, J. (1972). Logistic discrimination. Biometrika, 59:19-35.
Anderson, J. and Rosenfeld, E., editors (1988). Neurocomputing: Foundations of Research.
MIT Press, Cambridge.
Becker, S. and Hinton, G. E. (1992). A self-organizing neural network that discovers
surfaces in random-dot stereograms. Nature, 355: 161-163.
Bienenstock, E., Cooper, L., and Munro, P. (1982). Theory for the development of neuron
selectivity: Orientation specificity and binocular interaction in visual cortex. Journal
of Neuroscience, 2. Reprinted in (Anderson and Rosenfeld, 1988).
Intrator, N. (1992). Feature extraction using an unsupervised neural network. Neural
Computation, 4:98-107.
Linsker, R. (1988). Self-organization in a perceptual network. Computer, pages 105-117.
Munro, P. W. (1992). Visualizations of 2-d hidden unit space. In International Joint
Conference on Neural Networks, volume 3, pages 468-473, Baltimore 1992. IEEE.
Peterson, G. E. and Barney, H. L. (1952). Control methods used in a study of the vowels.
Journal of the Acoustical Society of America, 24: 175-184.
Schmidhuber, J. (1992). Learning factorial codes by predictability minimization. Neural
Computation, 4:863-879.
Sejnowski, T. J. (1977). Storing covariance with nonlinearly interacting neurons. Journal
Of Mathematical Biology, 4:303-321.
DeepMath - Deep Sequence Models for Premise
Selection
Alexander A. Alemi*
Google Inc.
[email protected]

Geoffrey Irving*
Google Inc.
[email protected]

François Chollet*
Google Inc.
[email protected]

Christian Szegedy*
Google Inc.
[email protected]

Niklas Een*
Google Inc.
[email protected]

Josef Urban*†
Czech Technical University in Prague
[email protected]
Abstract
We study the effectiveness of neural sequence models for premise selection in
automated theorem proving, one of the main bottlenecks in the formalization of
mathematics. We propose a two stage approach for this task that yields good
results for the premise selection task on the Mizar corpus while avoiding the handengineered features of existing state-of-the-art models. To our knowledge, this is
the first time deep learning has been applied to theorem proving on a large scale.
1 Introduction
Mathematics underpins all scientific disciplines. Machine learning itself rests on measure and
probability theory, calculus, linear algebra, functional analysis, and information theory. Complex
mathematics underlies computer chips, transit systems, communication systems, and financial infrastructure; thus the correctness of many of these systems can be reduced to mathematical proofs.
Unfortunately, these correctness proofs are often impractical to produce without automation, and
present-day computers have only limited ability to assist humans in developing mathematical proofs
and formally verifying human proofs. There are two main bottlenecks: (1) lack of automated methods
for semantic or formal parsing of informal mathematical texts (autoformalization), and (2) lack of
strong automated reasoning methods to fill in the gaps in already formalized human-written proofs.
The two bottlenecks are related. Strong automated reasoning can act as a semantic filter for autoformalization, and successful autoformalization would provide a large corpus of computer-understandable
facts, proofs, and theory developments. Such a corpus would serve as both background knowledge to
fill in gaps in human-level proofs and as a training set to guide automated reasoning. Such guidance
is crucial: exhaustive deductive reasoning tools such as today's resolution/superposition automated
theorem provers (ATPs) quickly hit combinatorial explosion, and are unusable when reasoning with a
very large number of facts without careful selection [4].
In this work, we focus on the latter bottleneck. We develop deep neural networks that learn from a
large repository of manually formalized computer-understandable proofs. We learn the task that is
essential for making today?s ATPs usable over large formal corpora: the selection of a limited number
of most relevant facts for proving a new conjecture. This is known as premise selection.
The main contributions of this work are:
* Authors listed alphabetically. All contributions are considered equal.
† Supported by ERC Consolidator grant nr. 649043 AI4REASON.
• A demonstration for the first time that neural network models are useful for aiding in large
scale automated logical reasoning without the need for hand-engineered features.
• The comparison of various network architectures (including convolutional, recurrent and
hybrid models) and their effect on premise selection performance.
• A method of semantic-aware "definition"-embeddings for function symbols that improves
the generalization of formulas with symbols occurring infrequently. This model outperforms
previous approaches.
• Analysis showing that neural network based premise selection methods are complementary
to those with hand-engineered features: ensembling with previous results produces superior
results.
2 Formalization and Theorem Proving
In the last two decades, large corpora of complex mathematical knowledge have been formalized:
encoded in complete detail so that computers can fully understand the semantics of complicated
mathematical objects. The process of writing such formal and verifiable theorems, definitions, proofs,
and theories is called Interactive Theorem Proving (ITP).
The ITP field dates back to 1960s [16] and the Automath system by N.G. de Bruijn [9]. ITP systems
include HOL (Light) [15], Isabelle [37], Mizar [13], Coq [7], and ACL2 [23]. The development of
ITP has been intertwined with the development of its cousin field of Automated Theorem Proving
(ATP) [31], where proofs of conjectures are attempted fully automatically. Unlike ATP systems,
ITP systems allow human-assisted formalization and proving of theorems that are often beyond the
capabilities of the fully automated systems.
Large ITP libraries include the Mizar Mathematical Library (MML) with over 50,000 lemmas, and
the core Isabelle, HOL, Coq, and ACL2 libraries with thousands of lemmas. These core libraries are a
basis for large projects in formalized mathematics and software and hardware verification. Examples
in mathematics include the HOL Light proof of the Kepler conjecture (Flyspeck project) [14], the
Coq proofs of the Feit-Thompson theorem [12] and Four Color theorem [11], and the verification of
most of the Compendium of Continuous Lattices in Mizar [2]. ITP verification of the seL4 kernel [25]
and CompCert compiler [27] show comparable progress in large scale software verification. While
these large projects mark a coming of age of formalization, ITP remains labor-intensive. For example,
Flyspeck took about 20 person-years, with twice as much for Feit-Thompson. Behind this cost are
our two bottlenecks: lack of tools for autoformalization and strong proof automation.
Recently the field of Automated Reasoning in Large Theories (ARLT) [35] has developed, including
AI/ATP/ITP (AITP) systems called hammers that assist ITP formalization [4]. Hammers analyze
the full set of theorems and proofs in the ITP libraries, estimate the relevance of each theorem, and
apply optimized translations from the ITP logic to simpler ATP formalism. Then they attack new
conjectures using the most promising combinations of existing theorems and ATP search strategies.
Recent evaluations have proved 40% of all Mizar and Flyspeck theorems fully automatically [20, 21].
However, there is significant room for improvement: with perfect premise selection (a perfect choice
of library facts) ATPs can prove at least 56% of Mizar and Flyspeck instead of today?s 40% [4]. In
the next section we explain the premise selection task and the experimental setting for measuring
such improvements.
3 Premise Selection, Experimental Setting and Previous Results
Given a formal corpus of facts and proofs expressed in an ATP-compatible format, our task is
Definition (Premise selection problem). Given a large set of premises P, an ATP system A with
given resource limits, and a new conjecture C, predict those premises from P that will most likely
lead to an automatically constructed proof of C by A.
We use the Mizar Mathematical Library (MML) version 4.181.1147³ as the formal corpus and E
prover [32] version 1.9 as the underlying ATP system. The following list exemplifies a small non-

³ ftp://mizar.uwb.edu.pl/pub/system/i386-linux/mizar-7.13.01_4.181.1147-i386-linux.tar
:: t99_jordan: Jordan curve theorem in Mizar
for C being Simple_closed_curve holds C is Jordan;
:: Translation to first order logic
fof(t99_jordan, axiom, (! [A] : ( (v1_topreal2(A) & m1_subset_1(A,
k1_zfmisc_1(u1_struct_0(k15_euclid(2))))) => v1_jordan1(A)) ) ).
Figure 1: (top) The final statement of the Mizar formalization of the Jordan curve theorem. (bottom) The
translation to first-order logic, using name mangling to ensure uniqueness across the entire corpus.
(a) Length in chars.
(b) Length in words.
(c) Word occurrences.
(d) Dependencies.
Figure 2: Histograms of statement lengths, occurrences of each word, and statement dependencies in the
Mizar corpus translated to first order logic. The wide length distribution poses difficulties for RNN models and
batching, and many rarely occurring words make it important to take definitions of words into account.
representative sample of topics and theorems that are included in the Mizar Mathematical Library:
Cauchy-Riemann Differential Equations of Complex Functions, Characterization and Existence of
Gröbner Bases, Maximum Network Flow Algorithm by Ford and Fulkerson, Gödel's Completeness
Theorem, Brouwer Fixed Point Theorem, Arrow's Impossibility Theorem, Borsuk-Ulam Theorem,
Dickson's Lemma, Sylow Theorems, Hahn-Banach Theorem, The Law of Quadratic Reciprocity,
Pepin's Primality Test for Public-Key Cryptography, Ramsey's Theorem.
This version of MML was used for the latest AITP evaluation reported in [21]. There are 57,917
proved Mizar theorems and unnamed top-level lemmas in this MML organized into 1,147 articles.
This set is chronologically ordered by the order of articles in MML and by the order of theorems in
the articles. Proofs of later theorems can only refer to earlier theorems. This ordering also applies
to 88,783 other Mizar formulas (encoding the type system and other automation known to Mizar)
used in the problems. The formulas have been translated into first-order logic formulas by the MPTP
system [34] (see Figure 1).
Our goal is to automatically prove as many theorems as possible, using at each step all previous
theorems and proofs. We can learn from both human proofs and ATP proofs, but previous experiments [26, 20] show that learning only from the ATP proofs is preferable to including human proofs
if the set of ATP proofs is sufficiently large. Since for 32,524 (56.2%) of the 57,917 theorems an ATP
proof was previously found by a combination of manual and learning-based premise selection [21],
we use only these ATP proofs for training.
The 40% success rate from [21] used a portfolio of 14 AITP methods using different learners, ATPs,
and numbers of premises. The best single method proved 27.3% of the theorems. Only fast and
simple learners such as k-nearest-neighbors, naive Bayes, and their ensembles were used, based on
hand-crafted features such as the set of (normalized) sub-terms and symbols in each formula.
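For intuition, the following is a heavily simplified sketch of such a distance-weighted k-NN premise ranker over symbol-set features. Everything here is an illustrative assumption (Jaccard similarity, k = 40, the data layout); the actual systems [19, 21] use richer features and tuned weighting schemes.

def knn_premises(conjecture_syms, proofs, k=40):
    # proofs: maps the symbol set (frozenset) of each previously proved
    # theorem to the list of premises used in its ATP proof.
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    neighbors = sorted(proofs, key=lambda t: jaccard(conjecture_syms, t),
                       reverse=True)[:k]
    scores = {}
    for thm in neighbors:
        weight = jaccard(conjecture_syms, thm)   # distance weighting
        for premise in proofs[thm]:
            scores[premise] = scores.get(premise, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)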
4 Motivation for the use of Deep Learning
Strong premise selection requires models capable of reasoning over mathematical statements, here
encoded as variable-length strings of first-order logic. In natural language processing, deep neural networks have proven useful in language modeling [28], text classification [8], sentence pair scoring [3],
conversation modeling [36], and question answering [33]. These results have demonstrated the ability
of deep networks to extract useful representations from sequential inputs without hand-tuned feature
engineering. Neural networks can also mimic some higher-level reasoning on simple algorithmic
tasks [38, 18].
[Figure 3 diagram: CNN/RNN sequence embedders for the conjecture and axiom first-order logic sequences; the two embeddings are concatenated and fed through a fully connected layer with 1024 outputs into a single-output logistic loss.]
Figure 3: (left) Our network structure. The input sequences are either character-level (section 5.1) or word-level
(section 5.2). We use separate models to embed conjecture and axiom, and a logistic layer to predict whether the
axiom is useful for proving the conjecture. (right) A convolutional model.
The Mizar data set is also an interesting case study in neural network sequence tasks, as it differs
from natural language problems in several ways. It is highly structured with a simple context free
grammar: the interesting task occurs only after parsing. The distribution of lengths is wide, ranging
from 5 to 84,299 characters with mean 304.5, and from 2 to 21,251 tokens with mean 107.4 (see
Figure 2). Fully recurrent models would have to back-propagate through 100s to 1000s of characters
or 100s of tokens to embed a whole statement. Finally, there are many rare words (60.3% of the
words occur fewer than 10 times), motivating the definition-aware embeddings in section 5.2.
5 Overview of our approach
The full premise selection task takes a conjecture and a set of axioms and chooses a subset of
axioms to pass to the ATP. We simplify from subset selection to pairwise relevance by predicting the
probability that a given axiom is useful for proving a given conjecture. This approach depends on a
relatively sparse dependency graph. Our general architecture is shown in Figure 3(left): the conjecture
and axiom sequences are separately embedded into fixed length real vectors, then concatenated and
passed to a third network with two fully connected layers and logistic loss. During training time, the
two embedding networks and the joined predictor path are trained jointly.
As discussed in section 3, we train our models on premise selection data generated by a combination
of various methods, including k-nearest-neighbor search on hand-engineered similarity metrics. We
start with a first stage of character-level models, and then build second and later stages of word-level
models on top of the results of earlier stages.
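A sketch of this general architecture in Keras follows (the paper's training used TensorFlow and Keras, see Section 6.4, but the sizes below, such as embedding dimension, filter width, and vocabulary, are illustrative placeholders, and the convolutional tower is only one of the embedder variants compared later):

from tensorflow.keras import layers, models

def make_embedder(vocab_size, seq_len, dim=256):
    # One tower: embeds a token sequence into a fixed-size vector
    # via stacked convolutions and a global max over time.
    inp = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, dim)(inp)
    x = layers.Conv1D(dim, 5, activation='relu')(x)
    x = layers.Conv1D(dim, 5, activation='relu')(x)
    x = layers.GlobalMaxPooling1D()(x)
    return models.Model(inp, x)

conj_net = make_embedder(vocab_size=1024, seq_len=500)
axiom_net = make_embedder(vocab_size=1024, seq_len=500)  # no weight sharing

pair = layers.Concatenate()([conj_net.output, axiom_net.output])
hidden = layers.Dense(1024, activation='relu')(pair)
prob = layers.Dense(1, activation='sigmoid')(hidden)     # P(axiom is useful)

model = models.Model([conj_net.input, axiom_net.input], prob)
model.compile(optimizer='adam', loss='binary_crossentropy')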
5.1 Stage 1: Character-level models
We begin by avoiding special purpose engineering by treating formulas on the character-level using
an 80 dimensional one-hot encoding of the character sequence. These sequences are passed to a
weight shared network for variable length input. For the embedding computation, we have explored
the following architectures:
1. Pure recurrent LSTM [17] and GRU [6] networks.
2. A pure multi-layer convolutional network with various numbers of convolutional layers (with strides)
followed by a global temporal max-pooling reduction (see Figure 3(right)).
3. A recurrent-convolutional network, that uses convolutional layers to produce a shorter sequence which
is processed by a LSTM.
The exact architectures used are specified in the experimental section.
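For concreteness, a small sketch of the character-level input encoding; the 80-symbol alphabet is assumed to be given, and the 2048-character truncation matches the limit mentioned in Section 6.4:

import numpy as np

def one_hot_chars(formula, alphabet, max_len=2048):
    # One-hot encode a formula's character sequence (assumes every
    # character of the formula appears in the fixed alphabet).
    enc = np.zeros((max_len, len(alphabet)), dtype=np.float32)
    for i, ch in enumerate(formula[:max_len]):
        enc[i, alphabet.index(ch)] = 1.0
    return enc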
It is computationally prohibitive to compute a large number of (conjecture, axiom) pairs due to the
costly embedding phase. Fortunately, our architecture allows caching the embeddings for conjectures
and axioms and evaluating the shared portion of the network for a given pair. This makes it practical
to consider all pairs during evaluation.
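A sketch of what such cached scoring might look like; classifier_head stands for the small shared network applied after concatenation and is an assumed callable, not an API from the paper:

import numpy as np

def rank_axioms(conjecture_emb, axiom_embs, classifier_head):
    # axiom_embs: (num_axioms x dim) array computed once by the axiom tower;
    # conjecture_emb: (dim,) vector computed once for this conjecture.
    pairs = np.hstack([np.tile(conjecture_emb, (len(axiom_embs), 1)),
                       axiom_embs])
    scores = np.asarray(classifier_head(pairs)).ravel()
    return np.argsort(-scores)             # axiom indices, best first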
5.2 Stage 2: Word-level models
The character-level models are limited to word and structure similarity within the axiom or conjecture
being embedded. However, many of the symbols occurring in a formula are defined by formulas
earlier in the corpus, and we can use the axiom-embeddings of those symbols to improve model
performance.
Since Mizar is based on first-order set theory, definitions of symbols can be either explicit or implicit.
An explicit definition of x sets x = e for some expression e, while an implicit definition states a
property of the defined object, such as defining a function f(x) by ∀x. f(f(x)) = g(x). To avoid
manually encoding the structure of implicit definitions, we embed the entire statement defining a
symbol f, and then use the stage 1 axiom-embedding corresponding to the whole statement as a
word-level embedding.
Ideally, we would train a single network that embeds statements by recursively expanding and
embedding the definitions of the defined symbols. Unfortunately, this recursion would dramatically
increase the cost of training since the definition chains can be quite deep. For example, Mizar defines
real numbers in terms of non-negative reals, which are defined as Dedekind cuts of non-negative
rationals, which are defined as ratios of naturals, etc. As an inexpensive alternative, we reuse the
axiom embeddings computed by a previously trained character-level model, mapping each defined
symbol to the axiom embedding of its defining statement. Other tokens such as brackets and operators
are mapped to fixed pseudo-random vectors of the same dimension.
Since we embed one token at a time ignoring the grammatical structure, our approach does not require
a parser: a trivial lexer is implemented in a few lines of Python. With word-level embeddings, we use
the same architectures with shorter input sequence to produce axiom and conjecture embeddings for
ranking the (conjecture, axiom) pairs. Iterating this approach by using the resulting, stronger axiom
embeddings as word embeddings multiple times for additional stages did not yield measurable gains.
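A sketch of the resulting token-to-vector mapping; hashing non-symbol tokens through crc32 to seed a deterministic pseudo-random vector is our own device, since the paper only states that such tokens get fixed pseudo-random vectors:

import zlib
import numpy as np

def token_vector(tok, def_axiom_emb, dim=256):
    # Defined symbols get the cached stage-1 embedding of their defining
    # statement; brackets, operators, etc. get a fixed pseudo-random vector.
    if tok in def_axiom_emb:
        return def_axiom_emb[tok]
    rng = np.random.RandomState(zlib.crc32(tok.encode()))  # deterministic
    return rng.randn(dim).astype(np.float32)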
6 Experiments
6.1 Experimental Setup
For training and evaluation we use a subset of 32,524 out of 57,917 theorems that are known to
be provable by an ATP given the right set of premises. We split off a random 10% of these (3,124
statements) for testing and validation. Also, we held out 400 statements from the 3,124 for monitoring
training progress, as well as for model and checkpoint selection. Final evaluation was done on the
remaining 2,724 conjectures. Note that we only held out conjectures, but we trained on all statements
as axioms. This is comparable to our k-NN baseline which is also trained on all statements as axioms.
The randomized selection of the training and testing sets may also lead to learning from future proofs:
a proof Pj of theorem Tj written after theorem Ti may guide the premise selection for Ti . However,
previous k-NN experiments show similar performance between a full 10-fold cross-validation and
incremental evaluation as long as chronologically preceding formulas participate in proofs of only
later theorems.
6.2 Metrics
For each conjecture, our models output a ranking of possible premises. Our primary metric is the
number of conjectures proved from the top-k premises, where k = 16, 32, . . . , 1024. This metric can
accommodate alternative proofs but is computationally expensive. Therefore we additionally measure
the ranking quality using the average maximum relative rank of the testing premise set. Formally,
average max relative rank is
aMRR = mean_C max_{P ∈ Ptest(C)} rank(P, Pavail(C)) / |Pavail(C)|,
where C ranges over conjectures, Pavail (C) is the set of premises available to prove C, Ptest (C) is the
set of premises for conjecture C from the test set, and rank(P, Pavail (C)) is the rank of premise P
among the set Pavail (C) according to the model. The motivation for aMRR is that conjectures are
easier to prove if all their dependencies occur early in the ranking.
Since it is too expensive to rank all axioms for a conjecture during continuous evaluation, we
approximate our objective. For our holdout set of 400 conjectures, we select all true dependencies
Ptest (C) and 128 fixed random false dependencies from Pavail (C) ? Ptest (C) and compute the average
max relative rank in this ordering. Note that aMRR is nonzero even if all true dependencies are
ordered before false dependencies; the best possible value is 0.051.
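Computing the metric is straightforward once per-conjecture rankings are available; a sketch with assumed data structures:

def amrr(rank_of, test_premises, available_counts):
    # rank_of[c][p]: rank of premise p for conjecture c under the model;
    # test_premises[c]: held-out true premises P_test(C);
    # available_counts[c]: |P_avail(C)|.
    vals = [max(rank_of[c][p] for p in test_premises[c]) / available_counts[c]
            for c in test_premises]
    return sum(vals) / len(vals)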
Figure 4: Specification of the different embedder networks.
6.3 Network Architectures
All our neural network models use the general architecture from Fig 3: a classifier on top of the
concatenated embeddings of an axiom and a conjecture. The same classifier architecture was used for
all models: a fully-connected neural network with one hidden layer of size 1024. For each model, the
axiom and conjecture embedding networks have the same architecture without sharing weights. The
details of the embedding networks are shown in Fig 4.
6.4 Network Training
The neural networks were trained using asynchronous distributed stochastic gradient descent using
the Adam optimizer [24] with up to 20 parallel NVIDIA K-80 GPU workers per model. We used the
TensorFlow framework [1] and the Keras library [5]. The weights were initialized using [10]. Polyak
averaging with 0.9999 decay was used for producing the evaluation weights [30]. The character-
level models were trained with maximum sequence length 2048 characters, whereas the word-level
(and definition embedding) based models had a maximum sequence length of 500 words. For good
performance, especially for low cutoff thresholds, it was critical to employ negative mining during
training. A side process was continuously evaluating many (conjecture, axiom) pairs. For each
conjecture, we pick the lowest scoring statements that have higher score than the lowest scoring true
positive. A queue of previously mined negatives is maintained for producing a mixture of examples
in which the ratio of mined instances is about 25% and the rest are randomly selected premises.
Negative mining was crucial for good quality: at the top-16 cutoff, the number of proved theorems
on the test set has doubled. For the union of proof attempts over all cutoff thresholds, the ratio of
successful proofs has increased from 61.3% to 66.4% for the best neural model.
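A sketch of the mined-negative bookkeeping described above; the queue size and sampling details are assumptions, and only the roughly 25% mined ratio comes from the text:

import random
from collections import deque

class NegativeMiner:
    def __init__(self, maxlen=100000, mined_ratio=0.25):
        self.queue = deque(maxlen=maxlen)   # previously mined hard negatives
        self.mined_ratio = mined_ratio

    def add(self, scored_negatives, weakest_true_score):
        # Keep false premises that outscore the lowest-scoring true positive.
        for pair, score in scored_negatives:
            if score > weakest_true_score:
                self.queue.append(pair)

    def sample(self, random_pool, n):
        # Mix ~25% mined hard negatives with randomly selected premises.
        n_mined = min(int(n * self.mined_ratio), len(self.queue))
        mined = random.sample(list(self.queue), n_mined)
        return mined + random.sample(random_pool, n - n_mined)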
6.5 Experimental Results
Our best selection pipeline uses a stage-1 character-level convolutional neural network model to
produce word-level embeddings for the second stage. The baseline uses distance-weighted kNN [19, 21] with handcrafted semantic features [22]. For all conjectures in our holdout set, we
consider all the chronologically preceding statements (lemmas, definitions and axioms) as premise
(a) Training accuracy for different character-level
models without hard negative mining. Recurrent
models seem to underperform, while pure convolutional
models yield the best results. For each architecture,
we trained three models with different random initialization seeds. Only the best runs are shown on this
graph; we did not see much variance between runs
on the same architecture.
(b) Test average max relative rank for different models without hard negative mining. The best is a
word-level CNN using definition embeddings from
a character-level 2-layer CNN. An identical word-embedding model with random starting embedding
overfits after only 250,000 iterations and underperforms the best character-level model.
candidates. In the DeepMath case, premises were ordered by their logistic scores. E prover was
applied to the top-k of the premise-candidates for each of the cutoffs k ∈ {16, 32, ..., 1024} until a
proof is found or k = 1024 fails. Table 1 reports the number of theorems proved with a cutoff value
at most the k in the leftmost column. For E prover, we used auto strategy with a soft time limit of 90
seconds, a hard time limit of 120 seconds, a memory limit of 4 GB, and a processed clauses limit of
500,000.
Our most successful models employ simple convolutional networks followed by max pooling (as
opposed to recurrent networks like LSTM/GRU), and the two stage definition-based def-CNN
outperforms the naïve word-CNN word embedding significantly. In the latter the word embeddings
were learned in a single pass; in the former they are fixed from the stage-1 character-level model. For
each architecture (cf. Figure 4) two convolutional layers perform best. Although our models differ
significantly from each other, they differ even more from the k-NN baseline based on hand-crafted
features. The right column of Table 1 shows the result if we average the prediction score of the stage-1
model with that of the definition based stage-2 model. We also experimented with character-based
RNN models using shorter sequences: these lagged behind our long-sequence CNN models but
performed significantly better than those RNNs trained on longer sequences. This suggests that RNNs
could be improved by more sophisticated optimization techniques such as curriculum learning.
Cutoff | k-NN Baseline (%) | char-CNN (%) | word-CNN (%) | def-CNN-LSTM (%) | def-CNN (%) | def+char-CNN (%)
16     | 674 (24.6)        | 687 (25.1)   | 709 (25.9)   | 644 (23.5)       | 734 (26.8)  | 835 (30.5)
32     | 1081 (39.4)       | 1028 (37.5)  | 1063 (38.8)  | 924 (33.7)       | 1093 (39.9) | 1218 (44.4)
64     | 1399 (51)         | 1295 (47.2)  | 1355 (49.4)  | 1196 (43.6)      | 1381 (50.4) | 1470 (53.6)
128    | 1612 (58.8)       | 1534 (55.9)  | 1552 (56.6)  | 1401 (51.1)      | 1617 (59)   | 1695 (61.8)
256    | 1709 (62.3)       | 1656 (60.4)  | 1635 (59.6)  | 1519 (55.4)      | 1708 (62.3) | 1780 (64.9)
512    | 1762 (64.3)       | 1711 (62.4)  | 1712 (62.4)  | 1593 (58.1)      | 1780 (64.9) | 1830 (66.7)
1024   | 1786 (65.1)       | 1762 (64.3)  | 1755 (64)    | 1647 (60.1)      | 1822 (66.4) | 1862 (67.9)
Table 1: Results of ATP premise selection experiments with hard negative mining on a test set of 2,742 theorems.
Each entry is the number (%) of theorems proved by E prover using that particular model to rank the premises.
The union of def-CNN and char-CNN proves 69.8% of the test set, while the union of the def-CNN and k-NN
proves 74.25%. This means that the neural network predictions are more complementary to the k-NN predictions
than to other neural models. The union of all methods proves 2218 theorems (80.9%) and just the neural models
prove 2151 (78.4%).
7 Conclusions
In this work we provide evidence that even simple neural models can compete with hand-engineered
features for premise selection, helping to find many new proofs. This translates to real gains in
automatic theorem proving.

Model        | Test min average relative rank
char-CNN     | 0.0585
word-CNN     | 0.06
def-CNN-LSTM | 0.0605
def-CNN      | 0.0575

(b) Best sustained test results obtained by the above models. Lower values are better. This was
monitored continuously during training on a holdout set with 400 theorems, using all true positive
premises and 128 randomly selected negatives. In this setup, the lowest attainable max average
relative rank with perfect predictions is 0.051.

(a) Jaccard similarities between proved sets of conjectures across models. The predictions of the
neural network models are more similar to each other than to those of the k-NN baseline.

Despite these encouraging results, our models are relatively shallow
networks with inherent limitations to representational power and are incapable of capturing high level
properties of mathematical statements. We believe theorem proving is a challenging and important
domain for deep learning methods, and that more sophisticated optimization techniques and training
methodologies will prove more useful than in less structured domains.
8 Acknowledgments
We would like to thank Cezary Kaliszyk for providing us with an improved baseline model. Also
many thanks go to the Google Brain team for their generous help with the training infrastructure. We
would like to thank Quoc Le for useful discussions on the topic and to Sergio Guadarrama for his
help with TensorFlow-slim.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] G. Bancerek and P. Rudnicki. A Compendium of Continuous Lattices in MIZAR. J. Autom. Reasoning, 29(3-4):189-224, 2002.
[3] P. Baudiš, J. Pichl, T. Vyskočil, and J. Šedivý. Sentence pair scoring: Towards unified framework for text comprehension. arXiv preprint arXiv:1603.06127, 2016.
[4] J. C. Blanchette, C. Kaliszyk, L. C. Paulson, and J. Urban. Hammering towards QED. J. Formalized Reasoning, 9(1):101-148, 2016.
[5] F. Chollet. Keras. https://github.com/fchollet/keras, 2015.
[6] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Gated feedback recurrent neural networks. arXiv preprint arXiv:1502.02367, 2015.
[7] The Coq Proof Assistant. http://coq.inria.fr.
[8] A. M. Dai and Q. V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3061-3069, 2015.
[9] N. de Bruijn. The mathematical language AUTOMATH, its usage, and some of its extensions. In M. Laudet, editor, Proceedings of the Symposium on Automatic Demonstration, pages 29-61, Versailles, France, Dec. 1968. Springer-Verlag LNM 125.
[10] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249-256, 2010.
[11] G. Gonthier. The four colour theorem: Engineering of a formal proof. In D. Kapur, editor, Computer Mathematics, 8th Asian Symposium, ASCM 2007, Singapore, December 15-17, 2007. Revised and Invited Papers, volume 5081 of Lecture Notes in Computer Science, page 333. Springer, 2007.
[12] G. Gonthier, A. Asperti, J. Avigad, Y. Bertot, C. Cohen, F. Garillot, S. L. Roux, A. Mahboubi, R. O'Connor, S. O. Biha, I. Pasca, L. Rideau, A. Solovyev, E. Tassi, and L. Théry. A machine-checked proof of the Odd Order Theorem. In S. Blazy, C. Paulin-Mohring, and D. Pichardie, editors, ITP, volume 7998 of LNCS, pages 163-179. Springer, 2013.
[13] A. Grabowski, A. Korniłowicz, and A. Naumowicz. Mizar in a nutshell. J. Formalized Reasoning, 3(2):153-245, 2010.
[14] T. C. Hales, M. Adams, G. Bauer, D. T. Dang, J. Harrison, T. L. Hoang, C. Kaliszyk, V. Magron, S. McLaughlin, T. T. Nguyen, T. Q. Nguyen, T. Nipkow, S. Obua, J. Pleso, J. Rute, A. Solovyev, A. H. T. Ta, T. N. Tran, D. T. Trieu, J. Urban, K. K. Vu, and R. Zumkeller. A formal proof of the Kepler conjecture. CoRR, abs/1501.02155, 2015.
[15] J. Harrison. HOL Light: A tutorial introduction. In M. K. Srivas and A. J. Camilleri, editors, FMCAD, volume 1166 of LNCS, pages 265-269. Springer, 1996.
[16] J. Harrison, J. Urban, and F. Wiedijk. History of interactive theorem proving. In J. H. Siekmann, editor, Computational Logic, volume 9 of Handbook of the History of Logic, pages 135-214. North-Holland, 2014.
[17] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
[18] Ł. Kaiser and I. Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
[19] C. Kaliszyk and J. Urban. Stronger automation for Flyspeck by feature weighting and strategy evolution. In J. C. Blanchette and J. Urban, editors, PxTP 2013, volume 14 of EPiC Series, pages 87-95. EasyChair, 2013.
[20] C. Kaliszyk and J. Urban. Learning-assisted automated reasoning with Flyspeck. J. Autom. Reasoning, 53(2):173-213, 2014.
[21] C. Kaliszyk and J. Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245-256, 2015.
[22] C. Kaliszyk, J. Urban, and J. Vyskocil. Efficient semantic features for automated reasoning over large theories. In Q. Yang and M. Wooldridge, editors, Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 3084-3090. AAAI Press, 2015.
[23] M. Kaufmann and J. S. Moore. An ACL2 tutorial. In Mohamed et al. [29], pages 17-21.
[24] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] G. Klein, J. Andronick, K. Elphinstone, G. Heiser, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt, R. Kolanski, M. Norrish, T. Sewell, H. Tuch, and S. Winwood. seL4: formal verification of an operating-system kernel. Commun. ACM, 53(6):107-115, 2010.
[26] D. Kuehlwein and J. Urban. Learning from multiple proofs: First experiments. In P. Fontaine, R. A. Schmidt, and S. Schulz, editors, PAAR-2012, volume 21 of EPiC Series, pages 82-94. EasyChair, 2013.
[27] X. Leroy. Formal verification of a realistic compiler. Commun. ACM, 52(7):107-115, 2009.
[28] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3, 2010.
[29] O. A. Mohamed, C. A. Muñoz, and S. Tahar, editors. Theorem Proving in Higher Order Logics, 21st International Conference, TPHOLs 2008, Montreal, Canada, August 18-21, 2008. Proceedings, volume 5170 of LNCS. Springer, 2008.
[30] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[31] J. A. Robinson and A. Voronkov, editors. Handbook of Automated Reasoning (in 2 volumes). Elsevier and MIT Press, 2001.
[32] S. Schulz. E - A Brainiac Theorem Prover. AI Commun., 15(2-3):111-126, 2002.
[33] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431-2439, 2015.
[34] J. Urban. MPTP 0.2: Design, implementation, and initial experiments. JAR, 37(1-2):21-43, 2006.
[35] J. Urban and J. Vyskočil. Theorem proving in large formal mathematics as an emerging AI field. In M. P. Bonacina and M. E. Stickel, editors, Automated Reasoning and Mathematics: Essays in Memory of William McCune, volume 7788 of LNAI, pages 240-257. Springer, 2013.
[36] O. Vinyals and Q. Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
[37] M. Wenzel, L. C. Paulson, and T. Nipkow. The Isabelle framework. In Mohamed et al. [29], pages 33-38.
[38] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
Optimizing Affinity-Based Binary Hashing
Using Auxiliary Coordinates
Ramin Raziperchikolaei
EECS, University of California, Merced
[email protected]
Miguel Á. Carreira-Perpiñán
EECS, University of California, Merced
[email protected]
Abstract
In supervised binary hashing, one wants to learn a function that maps a highdimensional feature vector to a vector of binary codes, for application to fast image retrieval. This typically results in a difficult optimization problem, nonconvex
and nonsmooth, because of the discrete variables involved. Much work has simply
relaxed the problem during training, solving a continuous optimization, and truncating the codes a posteriori. This gives reasonable results but is quite suboptimal.
Recent work has tried to optimize the objective directly over the binary codes and
achieved better results, but the hash function was still learned a posteriori, which
remains suboptimal. We propose a general framework for learning hash functions
using affinity-based loss functions that uses auxiliary coordinates. This closes the
loop and optimizes jointly over the hash functions and the binary codes so that
they gradually match each other. The resulting algorithm can be seen as an iterated version of the procedure of optimizing first over the codes and then learning
the hash function. Compared to this, our optimization is guaranteed to obtain better hash functions while being not much slower, as demonstrated experimentally
in various supervised datasets. In addition, our framework facilitates the design of
optimization algorithms for arbitrary types of loss and hash functions.
Information retrieval arises in several applications, most obviously web search. For example, in
image retrieval, a user is interested in finding similar images to a query image. Computationally,
this essentially involves defining a high-dimensional feature space where each relevant image is
represented by a vector, and then finding the closest points (nearest neighbors) to the vector for the
query image, according to a suitable distance. For example, one can use features such as SIFT or
GIST [23] and the Euclidean distance for this purpose. Finding nearest neighbors in a dataset of
N images (where N can be millions), each a vector of dimension D (typically in the hundreds)
is slow, since exact algorithms run essentially in time O(N D) and space O(N D) (to store the
image dataset). In practice, this is approximated, and a successful way to do this is binary hashing
[12]. Here, given a high-dimensional vector x ∈ R^D, the hash function h maps it to a b-bit vector
z = h(x) ∈ {−1, +1}^b, and the nearest neighbor search is then done in the binary space. This
now costs O(N b) time and space, which is orders of magnitude faster because typically b < D
and, crucially, (1) operations with binary vectors (such as computing Hamming distances) are very
fast because of hardware support, and (2) the entire dataset can fit in (fast) memory rather than slow
memory or disk.
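To make the speed argument concrete, here is a minimal sketch (ours, not from the paper) of Hamming-distance retrieval over bit-packed codes; the function names and packing layout are illustrative assumptions.

```python
# Minimal sketch (ours): Hamming-distance retrieval over bit-packed binary codes.
import numpy as np

def pack_codes(Z):
    """Pack an (N, b) array of {-1,+1} codes into uint8 words (8 bits per byte)."""
    return np.packbits((Z > 0).astype(np.uint8), axis=1)

def hamming_rank(query_packed, db_packed, k):
    """Indices of the k database codes closest to the query in Hamming distance."""
    xor = np.bitwise_xor(db_packed, query_packed)   # differing bits
    dist = np.unpackbits(xor, axis=1).sum(axis=1)   # popcount per row
    return np.argsort(dist)[:k]
```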
The disadvantage is that the results are inexact, since the neighbors in the binary space will not be
identical to the neighbors in the original space. However, the approximation error can be controlled
by using sufficiently many bits and by learning a good hash function. This has been the topic of
much work in recent years. The general approach consists of defining a supervised objective that has
a small value for good hash functions and minimizing it. Ideally, such an objective function should
be minimal when the neighbors of any given image are the same in both original and binary spaces.
Practically, in information retrieval, this is often evaluated using precision and recall. However, this
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
ideal objective cannot be easily optimized over hash functions, and one uses approximate objectives
instead. Many such objectives have been proposed in the literature. We focus here on affinity-based
loss functions, which directly try to preserve the original similarities in the binary space. Specifically,
we consider objective functions of the form
    \min_h \mathcal{L}(h) = \sum_{n,m=1}^N L(h(x_n), h(x_m); y_{nm})        (1)
where X = (x_1, …, x_N) is the high-dimensional dataset of feature vectors, min_h means minimizing over the parameters of the hash function h (e.g. over the weights of a linear SVM), and L(·)
is a loss function that compares the codes for two images (often through their Hamming distance
‖h(x_n) − h(x_m)‖) with the ground-truth value y_nm that measures the affinity in the original space
between the two images x_n and x_m (distance, similarity or other measure of neighborhood; [12]).
The sum is often restricted to a subset of image pairs (n, m) (for example, within the k nearest
neighbors of each other in the original space), to keep the runtime low. Examples of these objective functions (described below) include models developed for dimension reduction, be they spectral
such as Laplacian Eigenmaps [2] and Locally Linear Embedding [24], or nonlinear such as the Elastic Embedding [4] or t-SNE [26]; as well as objective functions designed specifically for binary
hashing, such as Supervised Hashing with Kernels (KSH) [19], Binary Reconstructive Embeddings
(BRE) [14] or sequential Projection Learning Hashing (SPLH) [29].
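For a supervised dataset, the restricted pair set and its affinities can be built directly from labels; the sketch below is ours (with KSH-style y_nm = ±1) and shows one simple way to do it.

```python
# Sketch (ours): sample kappa+ same-label and kappa- different-label neighbors
# per point, with affinities y_nm = +1 / -1 as used by losses such as KSH.
import numpy as np

def sample_pairs(labels, kappa_pos, kappa_neg, rng):
    pairs, y = [], []
    for n in range(len(labels)):
        same = np.flatnonzero(labels == labels[n])   # may include n itself
        diff = np.flatnonzero(labels != labels[n])
        for m in rng.choice(same, kappa_pos):
            pairs.append((n, m)); y.append(+1)
        for m in rng.choice(diff, kappa_neg):
            pairs.append((n, m)); y.append(-1)
    return np.array(pairs), np.array(y)
```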
If the hash function h was a continuous function of its input x and its parameters, one could simply
apply the chain rule to compute derivatives over the parameters of h of the objective function (1) and
then apply a nonlinear optimization method such as gradient descent. This would be guaranteed to
converge to an optimum under mild conditions (for example, Wolfe conditions on the line search),
which would be global if the objective is convex and local otherwise [21]. Hence, optimally learning
the function h would be in principle doable (up to local optima), although it would still be slow
because the objective can be quite nonlinear and involve many terms.
In binary hashing, the optimization is much more difficult, because in addition to the previous issues, the hash function must output binary values, hence the problem is not just generally nonconvex,
but also nonsmooth. In view of this, much work has sidestepped the issue and settled on a simple
but suboptimal solution. First, one defines the objective function (1) directly on the b-dimensional
codes of each image (rather than on the hash function parameters) and optimizes it assuming continuous codes (in R^b). Then, one binarizes the codes for each image. Finally, one learns a hash
function given the codes. Optimizing the affinity-based loss function (1) can be done using spectral methods or nonlinear optimization as described above. Binarizing the codes has been done in
different ways, from simply rounding them to {−1, +1} using zero as threshold [18, 19, 30, 33],
to optimally finding a threshold [18], to rotating the continuous codes so that thresholding introduces less error [11, 32]. Finally, learning the hash function for each of the b output bits can
be considered as a binary classification problem, where the resulting classifiers collectively give
the desired hash function, and can be solved using various machine learning techniques. Several
works (e.g. [16, 17, 33]) have used this approach, which does produce reasonable hash functions
(in terms of retrieval measures such as precision/recall).
In order to do better, one needs to take into account during the optimization (rather than after the
optimization) the fact that the codes are constrained to be binary. This implies attempting directly the
discrete optimization of the affinity-based loss function over binary codes. This is a daunting task,
since this is usually an NP-complete problem with N b binary variables altogether, and practical
applications could make this number as large as millions or beyond. Recent works have applied
alternating optimization (with various refinements) to this, where one optimizes over a usually small
subset of binary variables given fixed values for the remaining ones [16, 17], and this did result in
very competitive precision/recall compared with the state-of-the-art. This is still slow and future
work will likely improve it, but as of now it provides an option to learn better binary codes.
Of the three-step suboptimal approach mentioned (learn continuous codes, binarize them, learn hash
function), these works manage to join the first two steps and hence learn binary codes [16, 17]. Then,
one learns the hash function given these binary codes. Can we do better? Indeed, in this paper we
show that all elements of the problem (binary codes and hash function) can be incorporated in a
single algorithm that optimizes jointly over them. Hence, by initializing it from binary codes from
the previous approach, this algorithm is guaranteed to achieve a lower error and learn better hash
functions. Our framework can be seen as an iterated version of the two-step approach: learn binary
codes given the current hash function, learn hash functions given codes, iterate (note the emphasis).
The key to achieve this in a principled way is to use a recently proposed method of auxiliary coordinates (MAC) for optimizing “nested” systems, i.e., consisting of the composition of two or more
functions or processing stages. MAC introduces new variables and constraints that cause decoupling
between the stages, resulting in the mentioned alternation between learning the hash function and
learning the binary codes. Section 1 reviews affinity-based loss functions, section 2 describes our
MAC-based proposed framework, section 3 evaluates it in several supervised datasets, using linear
and nonlinear hash functions, and section 4 discusses implications of this work.
Related work Although one can construct hash functions without training data [1, 15], we focus on methods that learn the hash function given a training set, since they perform better, and our
emphasis is in optimization. The learning can be unsupervised [5, 11], which attempts to preserve
distances in the original space, or supervised, which in addition attempts to preserve label similarity.
Many objective functions have been proposed to achieve this and we focus on affinity-based ones.
These create an affinity matrix for a subset of training points based on their distances (unsupervised)
or labels (supervised) and combine it with a loss function [14, 16, 17, 19, 22]. Some methods optimize this directly over the hash function. For example, Binary Reconstructive Embeddings [14] use
alternating optimization over the weights of the hash functions. Supervised Hashing with Kernels
[19] learns hash functions sequentially by considering the difference between the inner product of
the codes and the corresponding element of the affinity matrix. Although many approaches exist,
a common theme is to apply a greedy approach where one first finds codes using an affinity-based
loss function, and then fits the hash functions to them (usually by training a classifier). The codes
can be found by relaxing the problem and binarizing its solution [18, 30, 33], or by approximately
solving for the binary codes using some form of alternating optimization (possibly combined with
GraphCut), as in two-step hashing [10, 16, 17], or by using relaxation in other ways [19, 22].
1 Nonlinear embedding and affinity-based loss functions for binary hashing
The dimensionality reduction literature has developed a number of objectives of the form (1) (often
called “embeddings”) where the low-dimensional projection z_n ∈ R^b of each high-dimensional
data point x_n ∈ R^D is a free, real-valued parameter. The neighborhood information is encoded in
the y_nm values (using labels in supervised problems, or distance-based affinities in unsupervised
problems). An example is the elastic embedding [4], where L(z_n, z_m; y_nm) has the form:

    y^+_{nm} \|z_n - z_m\|^2 + \lambda\, y^-_{nm} \exp(-\|z_n - z_m\|^2), \quad \lambda > 0        (2)

where the first term tries to project true neighbors (having y^+_{nm} > 0) close together, while the second
repels all non-neighbors’ projections (having y^-_{nm} > 0) from each other. Laplacian Eigenmaps [2]
and Locally Linear Embedding [24] result from replacing the second term above with a constraint
that fixes the scale of Z, which results in an eigenproblem rather than a nonlinear optimization, but
also produces more distorted embeddings. Other objectives exist, such as t-SNE [26], that do not
separate into functions of pairs of points. Optimizing nonlinear embeddings is quite challenging,
but much progress has been done recently [4, 6, 25, 27, 28, 31]. Although these models were
developed to produce continuous projections, they have been successfully used for binary hashing
too by truncating their codes [30, 33] or using the two-step approach of [16, 17].
Other loss functions have been developed specifically for hashing, where now z_n is a b-bit vector
(with binary values in {−1, +1}). For example (see a longer list in [16]), for Supervised
Hashing with Kernels (KSH), L(z_n, z_m; y_nm) has the form

    (z_n^T z_m - b\, y_{nm})^2        (3)

where y_nm is 1 if x_n, x_m are similar and −1 if they are dissimilar. Binary Reconstructive Embeddings [14] uses (\frac{1}{b}\|z_n - z_m\|^2 - y_{nm})^2 where y_{nm} = \frac{1}{2}\|x_n - x_m\|^2. The exponential variant of
SPLH [29] proposed by Lin et al. [16] (which we call eSPLH) uses \exp(-\frac{1}{b} y_{nm} z_n^T z_m). Our approach can be applied to any of these loss functions, though we will mostly focus on the KSH loss
for simplicity. When the variables Z are binary, we will call these optimization problems binary
embeddings, in analogy to the more traditional continuous embeddings for dimension reduction.
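For reference, evaluating the KSH loss of eq. (3) over a sampled pair list is a one-liner in vectorized form; this sketch is ours and assumes codes stored as an (N, b) array in {−1, +1}.

```python
# Sketch (ours): KSH loss of eq. (3) over a list of pairs.
import numpy as np

def ksh_loss(Z, pairs, y, b):
    zn, zm = Z[pairs[:, 0]], Z[pairs[:, 1]]
    inner = np.sum(zn * zm, axis=1)       # z_n^T z_m, in [-b, b]
    return np.sum((inner - b * y) ** 2)
```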
2 Learning codes and hash functions using auxiliary coordinates
The optimization of the loss L(h) in eq. (1) is difficult because of the thresholded hash function,
which appears as the argument of the loss function L. We use the recently proposed method of
auxiliary coordinates (MAC) [7, 8], which is a meta-algorithm to construct optimization algorithms
for nested functions. This proceeds in 3 stages. First, we introduce new variables (the “auxiliary
coordinates”) as equality constraints into the problem, with the goal of unnesting the function. We
can achieve this by introducing one binary vector z_n ∈ {−1, +1}^b for each point. This transforms
the original, unconstrained problem into the following equivalent, constrained problem:

    \min_{h,Z} \sum_{n,m=1}^N L(z_n, z_m; y_{nm}) \quad \text{s.t.} \quad z_1 = h(x_1), \ldots, z_N = h(x_N).        (4)
We recognize as the objective function the “embedding” form of the loss function, except that the
“free” parameters z_n are in fact constrained to be the deterministic outputs of the hash function h.
Second, we solve the constrained problem using a penalty method, such as the quadratic-penalty
or augmented Lagrangian [21]. We discuss here the former for simplicity. We solve the following
minimization problem (unconstrained again, but dependent on μ) while progressively increasing μ,
so the constraints are eventually satisfied:

    \min_{h,Z} \mathcal{L}_P(h, Z; \mu) = \sum_{n,m=1}^N L(z_n, z_m; y_{nm}) + \mu \sum_{n=1}^N \|z_n - h(x_n)\|^2 \quad \text{s.t.} \quad z_1, \ldots, z_N \in \{-1, +1\}^b.        (5)

‖z_n − h(x_n)‖² is proportional to the Hamming distance between the binary vectors z_n and h(x_n).
Third, we apply alternating optimization over the binary codes Z and the parameters of the hash
function h. This results in iterating the following two steps (described in detail later):
Z step Optimize the binary codes z1 , . . . , zN given h (hence, given the output binary codes
h(x1 ), . . . , h(xN ) for each of the N images). This can be seen as a regularized binary
embedding, because the projections Z are encouraged to be close to the hash function outputs h(X). Here, we try two different approaches [16, 17] with some modifications.
h step Optimize the hash function h given the binary codes Z. This simply means training b binary
classifiers using X as inputs and Z as labels.
This is very similar to the two-step (TSH) approach of Lin et al. [16], except that the latter learns the
codes Z in isolation, rather than given the current hash function, so iterating the two-step approach
would change nothing, and it does not optimize the loss L. More precisely, TSH corresponds to
optimizing L_P for μ → 0⁺. In practice, we start from a very small value of μ (hence, initialize MAC
from the result of TSH), and increase μ slowly while optimizing L_P, until the equality constraints
are satisfied, i.e., z_n = h(x_n) for n = 1, …, N. The supplementary material gives the overall
MAC algorithm to learn a hash function by optimizing an affinity-based loss function.
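Schematically, the MAC loop alternates the two steps under an increasing penalty; the sketch below is our paraphrase of the algorithm, with `fit_hash` and `z_step` as placeholders for the h and Z steps described next.

```python
# Schematic MAC loop (our paraphrase of Sec. 2); fit_hash and z_step are placeholders.
def mac_train(X, pairs, y, fit_hash, z_step, mu0=0.3, rho=1.4, max_iter=50):
    Z = z_step(X, pairs, y, h=None, mu=0.0)   # free/two-step codes (mu -> 0+)
    h = fit_hash(X, Z)                        # initial hash function
    mu = mu0
    for _ in range(max_iter):
        Z = z_step(X, pairs, y, h, mu)        # regularized binary embedding
        h = fit_hash(X, Z)                    # b binary classifiers
        if (Z == h(X)).all():                 # constraints satisfied: stop
            break
        mu *= rho                             # penalty schedule
    return h, Z
```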
Theoretical results We can prove the following under the assumption that the Z and h steps are
exact (suppl. mat.). 1) The MAC algorithm stops after a finite number of iterations, when Z = h(X)
in the Z step, since then the constraints are satisfied and no more changes will occur to Z or h. 2)
The path over the continuous penalty parameter μ ∈ [0, ∞) is in fact discrete. The minimizer (h, Z)
of L_P for μ ∈ [0, μ₁] is identical to the minimizer for μ = 0, and the minimizer for μ ∈ [μ₂, ∞)
is identical to the minimizer for μ → ∞, where 0 < μ₁ < μ₂ < ∞. Hence, it suffices to take an
initial μ no smaller than μ₁ and keep increasing it until the algorithm stops. Besides, the interval
[μ₁, μ₂] is itself partitioned in a finite set of intervals so that the minimizer changes only at interval
boundaries. Hence, theoretically the algorithm needs only run for a finite set of μ values (although
this set can still be very big). In practice, we increase μ more aggressively to reduce the runtime.
This is very different from the quadratic-penalty methods in continuous optimization [21], which
was the setting considered in the original MAC papers [7, 8]. There, the minimizer varies continuously with μ, which must be driven to infinity to converge to a stationary point, and in so doing it
gives rise to ill-conditioning and slow convergence.
2.1 h step: optimization over the parameters of the hash function, given the binary codes
Given the binary codes z_1, …, z_N, since h does not appear in the first term of L_P, this simply
involves finding a hash function h that minimizes

    \min_h \sum_{n=1}^N \|z_n - h(x_n)\|^2 = \sum_{i=1}^b \min_{h_i} \sum_{n=1}^N (z_{ni} - h_i(x_n))^2

where z_ni ∈ {−1, +1} is the ith bit of the binary vector z_n. Hence, we can find b one-bit hash
functions in parallel and concatenate them into the b-bit hash function. Each of these is a binary
classification problem using the number of misclassified patterns as loss. This allows us to use a
regular classifier for h, and even to use a simpler surrogate loss (such as the hinge loss), since this
will also enforce the constraints eventually (as ? increases). For example, we can fit an SVM by
optimizing the margin plus the slack and using a high penalty for misclassified patterns. We discuss
other classifiers in the experiments.
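As a concrete instance, the h step with linear SVMs can reuse an off-the-shelf classifier; the sketch below (ours) uses scikit-learn's LinearSVC, which wraps the LIBLINEAR solver [9] used in the paper, and the value of C is an illustrative assumption.

```python
# Sketch (ours) of the h step: one independent linear SVM per output bit.
import numpy as np
from sklearn.svm import LinearSVC

def fit_hash(X, Z, C=1e2):                    # high C to penalize misclassified bits
    svms = [LinearSVC(C=C).fit(X, Z[:, i]) for i in range(Z.shape[1])]
    def h(Xq):                                # returns codes in {-1, +1}
        return np.stack([s.predict(Xq) for s in svms], axis=1)
    return h
```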
2.2 Z step: optimization over the binary codes, given the hash function
Although the MAC technique has significantly simplified the original problem, the step over Z is
still complex. This involves finding the binary codes given the hash function h, and it is an NPcomplete problem in N b binary variables. Fortunately, some recent works have proposed practical
approaches for this problem based on alternating optimization: a quadratic surrogate method [16],
and a GraphCut method [17]. In both methods, the starting point is to apply alternating optimization
over the ith bit of all points given the remaining bits are fixed for all points (for i = 1, . . . , b), and
to solve the optimization over the ith bit approximately. This would correspond to the first step in
the two-step hashing of Lin et al. [16]. These methods, in their original form, can be applied to the
loss function over binary codes, i.e., the first term in L_P. Here, we explain briefly our modification
to these methods to make them work with our Z step objective (the regularized loss function over
codes, i.e., the complete L_P). The full explanation can be found in the supplementary material.
Solution using a quadratic surrogate method [16] This is based on the fact that any loss function
that depends on the Hamming distance of two binary variables can be equivalently written as a
quadratic function of those two binary variables. We can then write the first term in L_P as a binary
quadratic problem using a certain matrix A ∈ R^{N×N} (computed using the fixed bits), and the second
term (on μ) is also quadratic. The optimization for the ith bit can then be equivalently written as

    \min_{z_{(i)}} z_{(i)}^T A z_{(i)} + \mu \|z_{(i)} - h_i(X)\|^2 \quad \text{s.t.} \quad z_{(i)} \in \{-1, +1\}^N        (6)

where h_i(X) = (h_i(x_1), …, h_i(x_N))^T and z_{(i)} are vectors of length N (one bit per data point).
This is still an NP-complete problem (except in special cases), and we approximate it by relaxing it
to a continuous quadratic program (QP) over z_{(i)} ∈ [−1, 1]^N, minimizing it using L-BFGS-B [34]
and binarizing its solution.
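A minimal version of this relaxation for one bit, assuming the matrix A is given and symmetric, could look as follows (ours, using SciPy's L-BFGS-B).

```python
# Sketch (ours): relaxed QP of eq. (6) for one bit, solved with L-BFGS-B [34],
# then binarized. Assumes A is a given symmetric (N, N) matrix.
import numpy as np
from scipy.optimize import minimize

def z_step_bit(A, hi_X, mu):
    def f(z):
        Az = A.dot(z)
        val = z.dot(Az) + mu * np.sum((z - hi_X) ** 2)
        grad = 2 * Az + 2 * mu * (z - hi_X)   # gradient valid for symmetric A
        return val, grad
    res = minimize(f, hi_X.astype(float), jac=True, method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * len(hi_X))
    return np.where(res.x >= 0, 1, -1)
```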
Solution using a GraphCut algorithm [17] To optimize L_P over the ith bit of each image (given
all the other bits are fixed), we have to minimize the NP-complete problem of eq. (6) over N bits.
We can apply the GraphCut algorithm [3], as proposed by the FastHash algorithm of Lin et al. [17].
This proceeds as follows. First, we assign all the data points to different, possibly overlapping groups
(blocks). Then, we minimize the objective function over the binary codes of the same block, while
all the other binary codes are fixed, then proceed with the next block, etc. (that is, we do alternating
optimization of the bits over the blocks). Specifically, to optimize over the bits in block B, ignoring
the constants, we can rewrite equation (6) in the standard form for the GraphCut algorithm as:
P
P
P
minz(i,B) n?B m?B vnm zni zmi + n?B unm zni
P
where vnm = anm , unm = 2 m6?B anm zmi ? ?hi (xn ). To minimize the objective function using
the GraphCut algorithm, the blocks have to define a submodular function. In our case, this can be
easily achieved by putting points with the same label in one block ([17] give a simple proof of this).
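We do not reproduce a full GraphCut solver here; as a simplified stand-in for the exact per-block solve, the following sketch (ours) applies iterated conditional modes to the same block objective, assuming a symmetric pairwise matrix V.

```python
# Simplified stand-in (ours) for the per-block GraphCut solve: iterated
# conditional modes on the bits of block B for the objective
# sum_{n,m in B} v_nm z_n z_m + sum_{n in B} u_n z_n, with V symmetric.
import numpy as np

def icm_block(V, u, z, n_sweeps=5):
    for _ in range(n_sweeps):
        for n in range(len(z)):
            # local field multiplying z[n] in the objective (diagonal term is constant)
            field = 2 * (V[n].dot(z) - V[n, n] * z[n]) + u[n]
            z[n] = -1 if field > 0 else 1   # choose the sign minimizing z[n] * field
    return z
```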
3 Experiments
We have tested our framework with several combinations of loss function, hash function, number
of bits, datasets, and comparing with several state-of-the-art hashing methods (see suppl. mat.). We
report a representative subset to show the flexibility of the approach. We use the KSH (3) [19] and
eSPLH [29] loss functions. We test quadratic surrogate and GraphCut methods for the Z step in
MAC. As hash functions (for each bit), we use linear SVMs (trained with LIBLINEAR; [9]) and
kernel SVMs (with 500 basis functions).
We use the following labeled datasets: (1) CIFAR [13] contains 60 000 images in 10 classes. We use
D = 320 GIST features [23] from each image. We use 58 000 images for training and 2 000 for test.
(2) Infinite MNIST [20]. We generated, using elastic deformations of the original MNIST handwritten digit dataset, 1 000 000 images for training and 2 000 for test, in 10 classes. We represent
each image by a D = 784 vector of raw pixels. Because of the computational cost of affinity-based
methods, previous work has used training sets limited to a few thousand points [14, 16, 19, 22].
We train the hash functions in a subset of 10 000 points of the training set, and report precision and
recall by searching for a test query on the entire dataset (the base set).
[Figure 1 plots omitted: panels “KSH loss”, “eSPLH loss”, “KSH precision”, “eSPLH precision”; loss L vs. iterations and precision vs. k, for the kernel (ker-) and linear (lin-) variants of MACcut, MACquad, cut, quad, and ker-KSH.]
Figure 1: Loss function L and precision for k retrieved points, for the KSH and eSPLH loss functions
on the CIFAR dataset, using b = 48 bits.
We report precision (and precision/recall in the suppl. mat.) for the test set queries using as ground
truth (set of true neighbors in original space) all the training points with the same label. The retrieved
set contains the k nearest neighbors of the query point in the Hamming space. We report precision
for different values of k to test the robustness of different algorithms.
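The evaluation itself is straightforward; this sketch (ours) computes precision for the k retrieved points of a single query under the label-based ground truth just described.

```python
# Sketch (ours): precision at k for one query, ground truth = same-label points.
import numpy as np

def precision_at_k(query_code, db_codes, db_labels, query_label, k):
    dist = np.sum(query_code != db_codes, axis=1)   # Hamming distance
    retrieved = np.argsort(dist)[:k]
    return np.mean(db_labels[retrieved] == query_label)
```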
The main comparison points are the quadratic surrogate and GraphCut methods of Lin et al. [16, 17],
which we denote in this section as quad and cut, respectively, regardless of the hash function that
fits the resulting codes. Correspondingly, we denote the MAC versions of these as MACquad and
MACcut, respectively. We use the following schedule for the penalty parameter μ in the MAC
algorithm (regardless of the hash function type or dataset). We initialize Z with μ = 0, i.e., the
result of quad or cut. Starting from μ₁ = 0.3 (MACcut) or 0.01 (MACquad), we multiply μ by 1.4
after each iteration (Z and h step).
Our experiments show our MAC algorithm indeed finds hash functions with a significantly and consistently lower objective value than rounding or two-step approaches (in particular, cut and quad);
and that it outperforms other state-of-the-art algorithms on different datasets, with MACcut beating
MACquad most of the time. The improvement in precision makes using MAC well worth the relatively small extra runtime and minimal additional implementation effort it requires. In all our plots,
the vertical arrows indicate the improvement of MACcut over cut and of MACquad over quad.
The MAC algorithm finds better optima The goal of this paper is not to introduce a new affinitybased loss or hash function, but to describe a generic framework to construct algorithms that optimize a given combination thereof. We illustrate its effectiveness here with the CIFAR dataset, with
different sizes of retrieved neighbor sets, and using 16 to 48 bits. We optimize two loss functions
(KSH from eq. (3) and eSPLH), and two hash functions (linear and kernel SVM). In all cases, the
MAC algorithm achieves a better hash function both in terms of the loss and of the precision/recall.
We compare 4 ways of optimizing the loss function: quad [16], cut [17], MACquad and MACcut.
For each point x_n in the training set, we use κ⁺ = 100 positive and κ⁻ = 500 negative neighbors,
chosen at random to have the same or a different label as x_n, respectively. Fig. 1 (panels 1 and 3)
shows the KSH loss function for all the methods (including the original KSH method in [19]) over
iterations of the MAC algorithm (KSH, quad and cut do not iterate), as well as precision and recall.
It is clear that MACcut (red lines) and MACquad (magenta lines) reduce the loss function more than
cut (blue lines) and quad (black lines), respectively, as well as the original KSH algorithm (cyan), in
all cases: type of hash function (linear: dashed lines, kernel: solid lines) and number of bits b = 16
to 48 (suppl. mat.). Hence, applying MAC is always beneficial. Reducing the loss nearly always
translates into better precision and recall (with a larger gain for linear than for kernel hash functions,
usually). The gain of MACcut/MACquad over cut/quad is significant, often comparable to the gain
obtained by changing from the linear to the kernel hash function within the same algorithm.
We usually find cut outperforms quad (in agreement with [17]), and correspondingly MACcut outperforms MACquad. Interestingly, MACquad and MACcut end up being very similar even though
they started very differently. This suggests it is not crucial which of the two methods to use in
the MAC Z step, although we still prefer cut, because it usually produces somewhat better optima.
Finally, fig. 1 (panels 2 and 4) also shows the MACcut results using the eSPLH loss. All settings
are as in the first KSH experiment. As before, MACcut outperforms cut in both loss function and
precision/recall using either a linear or a kernel SVM.
Why does MAC learn better hash functions? In both the two-step and MAC approaches, the
starting point is the “free” binary codes obtained by minimizing the loss over the codes without
them being the output of a particular hash function. That is, minimizing (4) without the “z_n = h(x_n)” constraints:

    \min_Z E(Z) = \sum_{n,m=1}^N L(z_n, z_m; y_{nm}), \quad z_1, \ldots, z_N \in \{-1, +1\}^b.        (7)

[Figure 2 plots omitted: loss vs. number of bits b for free codes, cut and MACcut with linear and kernel hash functions, plus a schematic of the code space.]
Figure 2: Panels 1–2: like fig. 1 but showing the value of the error function E(Z) of eq. (7) for the
“free” binary codes, and for the codes produced by the hash functions learned by cut (the two-step
method) and MACcut, with linear and kernel hash functions. Panel 3: illustration of free codes,
two-step codes and optimal codes realizable by a hash function, in the space {−1, +1}^{b×N}.
The resulting free codes try to achieve good precision/recall independently of whether a hash function can actually produce such codes. Constraining the codes to be realizable by a specific family of
hash functions (say, linear), means the loss E(Z) will be larger than for free codes. How difficult is
it for a hash function to produce the free codes? Fig. 2 (panels 1?2) plots the loss function for the
free codes, the two-step codes from cut, and the codes from MACcut, for both linear and kernel hash
functions in the same experiment as in fig. 1. It is clear that the free codes have a very low loss E(Z),
which is far from what a kernel function can produce, and even farther from what a linear function
can produce. Both of these are relatively smooth functions that cannot represent the presumably
complex structure of the free codes. This could be improved by using a very flexible hash function
(e.g. using a kernel function with many centers), which could better approximate the free codes, but
1) a very flexible function would likely not generalize well, and 2) we require fast hash functions for
fast retrieval anyway. Given our linear or kernel hash functions, what the two-step cut optimization
does is fit the hash function directly to the free codes. This is not guaranteed to find the best hash
function in terms of the original problem (1), and indeed it produces a pretty suboptimal function. In
contrast, MAC gradually optimizes both the codes and the hash function so they eventually match,
and finds a better hash function for the original problem (although it is still not guaranteed to find
the globally optimal function of problem (1), which is NP-complete).
Fig. 2 (right) shows this conceptually. It shows the space of all possible binary codes, the contours of
E(Z) (green) and the set of codes that can be produced by (say) linear hash functions h (gray), which
is the feasible set {Z ∈ {−1, +1}^{b×N} : Z = h(X) for linear h}. The two-step codes “project” the
free codes onto the feasible set, but these are not the codes for the optimal hash function h.
Runtime The runtime per iteration for our 10 000-point training sets with b = 48 bits and κ⁺ =
100 and κ⁻ = 500 neighbors on a laptop is about 2 minutes for both MACcut and MACquad. They stop after
10–20 iterations. Each iteration is comparable to a single cut or quad run, since the Z step dominates
the computation. The iterations after the first one are faster because they are warm-started.
Comparison with binary hashing methods Fig. 3 shows results on CIFAR and Infinite MNIST.
We create affinities y_nm for all methods using the dataset labels as before, with κ⁺ = 100 similar
neighbors and κ⁻ = 500 dissimilar neighbors. We compare MACquad and MACcut with Two-Step
Hashing (quad) [16], FastHash (cut) [17], Hashing with Kernels (KSH) [19], Iterative Quantization
(ITQ) [11], Binary Reconstructive Embeddings (BRE) [14] and Self-Taught Hashing (STH) [33].
MACquad, MACcut, quad and cut all use the KSH loss function (3). The results show that MACcut
(and MACquad) generally outperform all other methods, often by a large margin, in nearly all situations (dataset, number of bits, size of retrieved set). In particular, MACcut and MACquad are the
only ones to beat ITQ, as long as one uses sufficiently many bits.
4 Discussion
The two-step approach of Two-Step Hashing [16] and FastHash [17] is a significant advance in
finding good codes for binary hashing, but it also causes a maladjustment between the codes and the
[Figure 3 plots omitted: precision vs. k on CIFAR (b = 16 and b = 64) and Infinite MNIST (b = 16 and b = 64) for MACcut, MACquad, cut, quad, KSH, ITQ, BRE and STH.]
Figure 3: Comparison with binary hashing methods on CIFAR (left) and Infinite MNIST (right),
using a linear hash function with b = 16 to 64 bits (suppl. mat.). Each plot shows the precision for
k retrieved points, for a range of k.
hash function, since the codes were learned without knowledge of what kind of hash function would
use them. Ignoring the interaction between the loss and the hash function limits the quality of the
results. For example, a linear hash function will have a harder time than a nonlinear one at learning
such codes. In our algorithm, this tradeoff is enforced gradually (as ? increases) in the Z step as a
regularization term (eq. (5)): it finds the best codes according to the loss function, but makes sure
they are close to being realizable by the current hash function. Our experiments demonstrate that
significant, consistent gains are achieved in both the loss function value and the precision/recall in
image retrieval over the two-step approach. Note that the objective (5) is not an ad-hoc combination
of a loss over the hash function and a loss over the codes; it follows by applying MAC to the welldefined top-level problem (1), and it solves it in the limit of large ? (up to local optima).
What is the best type of hash function to use? The answer to this is not unique, as it depends on
application-specific factors: quality of the codes produced (to retrieve the correct images), time to
compute the codes on high-dimensional data (since, after all, the reason to use binary hashing is
to speed up retrieval), ease of implementation within a given hardware architecture and software
libraries, etc. Our MAC framework facilitates this choice considerably, because training different
types of hash functions simply involves reusing an existing classification algorithm within the h
step, with no changes to the Z step.
5 Conclusion
We have proposed a general framework for optimizing binary hashing using affinity-based loss functions. It improves over previous, two-step approaches based on learning binary codes first and then
learning the hash function. Instead, it optimizes jointly over the binary codes and the hash function in alternation, so that the binary codes eventually match the hash function, resulting in a better
local optimum of the affinity-based loss. This was possible by introducing auxiliary variables that
conditionally decouple the codes from the hash function, and gradually enforcing the corresponding
constraints. Our framework makes it easy to design an optimization algorithm for a new choice of
loss function or hash function: one simply reuses existing software that optimizes each in isolation.
The resulting algorithm is not much slower than the two-step approach (it is comparable to iterating
the latter a few times) and is well worth the improvement in precision/recall.
The step over the hash function is essentially a solved problem if using a classifier, since this can
be learned in an accurate and scalable way using machine learning techniques. The most difficult
and time-consuming part in our approach is the optimization over the binary codes, which is NPcomplete and involves many binary variables and terms in the objective. Although some techniques
exist [16, 17] that produce practical results, designing algorithms that reliably find good local optima
and scale to large training sets is an important topic of future research.
Another direction for future work involves learning more sophisticated hash functions that go beyond mapping image features onto output binary codes using simple classifiers such as SVMs. This
is possible because the optimization over the hash function parameters is confined to the h step and
takes the form of a supervised classification problem, so we can apply an array of techniques from
machine learning and computer vision. For example, it may be possible to learn image features that
work better with hashing than standard features such as SIFT, or to learn transformations of the input
to which the binary codes should be invariant, such as translation, rotation or alignment.
Acknowledgments Work supported by NSF award IIS?1423515.
References
[1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Comm. ACM, 51(1):117–122, Jan. 2008.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, June 2003.
[3] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE PAMI, 26(9):1124–1137, Sept. 2004.
[4] M. Á. Carreira-Perpiñán. The elastic embedding algorithm for dimensionality reduction. ICML, 2010.
[5] M. Á. Carreira-Perpiñán and R. Raziperchikolaei. Hashing with binary autoencoders. CVPR, 2015.
[6] M. Á. Carreira-Perpiñán and M. Vladymyrov. A fast, universal algorithm to learn parametric nonlinear embeddings. NIPS, 2015.
[7] M. Á. Carreira-Perpiñán and W. Wang. Distributed optimization of deeply nested systems. arXiv:1212.5921 [cs.LG], Dec. 24 2012.
[8] M. Á. Carreira-Perpiñán and W. Wang. Distributed optimization of deeply nested systems. AISTATS, 2014.
[9] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, Aug. 2008.
[10] T. Ge, K. He, and J. Sun. Graph cuts for supervised binary coding. ECCV, 2014.
[11] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. IEEE PAMI, 35(12):2916–2929, Dec. 2013.
[12] K. Grauman and R. Fergus. Learning binary hash codes for large-scale image search. In R. Cipolla, S. Battiato, and G. Farinella, editors, Machine Learning for Computer Vision, pages 49–87. Springer-Verlag, 2013.
[13] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, U. Toronto, 2009.
[14] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. NIPS, 2009.
[15] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing. IEEE PAMI, 34(6):1092–1104, 2012.
[16] G. Lin, C. Shen, D. Suter, and A. van den Hengel. A general two-step approach to learning-based hashing. ICCV, 2013.
[17] G. Lin, C. Shen, Q. Shi, A. van den Hengel, and D. Suter. Fast supervised hashing with decision trees for high-dimensional data. CVPR, 2014.
[18] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. ICML, 2011.
[19] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. CVPR, 2012.
[20] G. Loosli, S. Canu, and L. Bottou. Training invariant support vector machines using selective sampling. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large Scale Kernel Machines, pages 301–320. MIT Press, 2007.
[21] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag, second edition, 2006.
[22] M. Norouzi and D. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
[23] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Computer Vision, 42(3):145–175, May 2001.
[24] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, Dec. 22 2000.
[25] L. J. P. van der Maaten. Barnes-Hut-SNE. In Int. Conf. Learning Representations (ICLR 2013), 2013.
[26] L. J. P. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. JMLR, 9:2579–2605, Nov. 2008.
[27] M. Vladymyrov and M. Á. Carreira-Perpiñán. Partial-Hessian strategies for fast learning of nonlinear embeddings. ICML, 2012.
[28] M. Vladymyrov and M. Á. Carreira-Perpiñán. Linear-time training of nonlinear low-dimensional embeddings. AISTATS, 2014.
[29] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for large scale search. IEEE PAMI, 2012.
[30] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. NIPS, 2009.
[31] Z. Yang, J. Peltonen, and S. Kaski. Scalable optimization for neighbor embedding for visualization. ICML, 2013.
[32] S. X. Yu and J. Shi. Multiclass spectral clustering. ICCV, 2003.
[33] D. Zhang, J. Wang, D. Cai, and J. Lu. Self-taught hashing for fast similarity search. SIGIR, 2010.
[34] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal. Algorithm 778: L-BFGS-B: FORTRAN subroutines for large-scale bound-constrained optimization. ACM Trans. Mathematical Software, 23(4):550–560, 1997.
Stochastic Gradient Geodesic MCMC Methods
Chang Liu†, Jun Zhu†, Yang Song†‡
†Dept. of Comp. Sci. & Tech., TNList Lab; Center for Bio-Inspired Computing Research;
State Key Lab for Intell. Tech. & Systems, Tsinghua University, Beijing, China
‡Dept. of Physics, Tsinghua University, Beijing, China
{chang-li14@mails, dcszj@}.tsinghua.edu.cn; [email protected]
Abstract
We propose two stochastic gradient MCMC methods for sampling from Bayesian
posterior distributions defined on Riemann manifolds with a known geodesic flow,
e.g. hyperspheres. Our methods are the first scalable sampling methods on these
manifolds, with the aid of stochastic gradients. Novel dynamics are conceived
and 2nd-order integrators are developed. By adopting embedding techniques and
the geodesic integrator, the methods do not require a global coordinate system of
the manifold and do not involve inner iterations. Synthetic experiments show the
validity of the method, and its application to the challenging inference for spherical
topic models indicate practical usability and efficiency.
1
Introduction
Dynamics-based Markov Chain Monte Carlo methods (D-MCMCs) are sampling methods using
dynamics simulation for state transition in a Markov chain. They have become a workhorse for
Bayesian inference, with well-known examples like Hamiltonian Monte Carlo (HMC) [22] and
stochastic gradient Langevin dynamics (SGLD) [29]. Here we consider variants for sampling from
distributions defined on Riemann manifolds. Overall, geodesic Monte Carlo (GMC) [7] stands out
for its notable performance on manifolds with known geodesic flow, such as simplex, hypersphere
and Stiefel manifold [26, 16]. Its applicability to manifolds with no global coordinate systems (e.g.
hyperspheres) is enabled by the embedding technique, and its geodesic integrator eliminates inner
(within one step in dynamics simulation) iteration to ensure efficiency. It is also used for efficient
sampling from constraint distributions [17]. Constrained HMC (CHMC) [6] aims at manifolds defined
by a constraint in some Rn . It covers all common manifolds, but inner iteration makes it less appealing.
Other D-MCMCs involving Riemann manifold, e.g. Riemann manifold Langevin dynamics (RMLD)
and Riemann manifold HMC (RMHMC) [13], are invented for better performance but still on the task
of sampling in Euclidean space, where the target variable is treated as the global coordinates of some
distribution manifold. Although they can be used to sample in non-Euclidean Riemann manifolds by
replacing the distribution manifold with the target manifold, a global coordinate system of the target
manifold is required. Moreover, RMHMC suffers from expensive inner iteration.
However, GMC scales undesirably to large datasets, which are becoming common. An effective
strategy to scale up D-MCMCs is by randomly sampling a subset to estimate a noisy but unbiased
stochastic gradient, with stochastic gradient MCMC methods (SG-MCMCs). Welling et al. [29]
pioneered in this direction by developing stochastic gradient Langevin dynamics (SGLD). Chen
et al. [9] apply the idea to HMC with stochastic gradient HMC (SGHMC), where a non-trivial
dynamics with friction has to be conceived. Ding et al. [10] propose stochastic gradient Nosé-Hoover
thermostats (SGNHT) to automatically adapt the friction to the noise by a thermostat. To unify
dynamics used for SG-MCMCs, Ma et al. [19] develop a complete recipe to formulate the dynamics.
∗ JZ is the corresponding author; YS is with the Department of Computer Science, Stanford University, CA.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Table 1: A summary of some D-MCMCs. −: sampling on manifold not supported; †: the integrators are not in the SSI scheme (it is unclear whether the claimed "2nd-order" is equivalent to ours); ‡: 2nd-order integrators for SGHMC and mSGNHT are developed by [8] and [18], respectively.

methods                    | stochastic gradient | no inner iteration | no global coordinates | order of integrator
GMC [7]                    | no                  | yes                | yes                   | 2nd
RMLD [13]                  | no                  | yes                | no                    | 1st
RMHMC [13]                 | no                  | no                 | no                    | 2nd†
CHMC [6]                   | no                  | no                 | yes                   | 2nd†
SGLD [29]                  | yes                 | yes                | −                     | 1st
SGHMC [9] / SGNHT [10]     | yes                 | yes                | −                     | 1st‡
SGRLD [23] / SGRHMC [19]   | yes                 | yes                | no                    | 1st
SGGMC / gSGNHT (proposed)  | yes                 | yes                | yes                   | 2nd
In this paper, we present two SG-MCMCs for manifolds with known geodesic flow: stochastic
gradient geodesic Monte Carlo (SGGMC) and geodesic stochastic gradient Nosé-Hoover thermostats
(gSGNHT). They are the first scalable sampling methods on manifolds with known geodesic flow and
no global coordinate systems. We use the recipe [19] to tackle the non-trivial dynamics conceiving
task. Our novel dynamics are also suitable for developing 2nd-order integrators by adopting the
symmetric splitting integrator (SSI) [8] scheme. A key property of a Kth-order integrator is that the
bias of the expected sample average at iteration $L$ can be upper bounded by $L^{-K/(K+1)}$ and the
mean square error by $L^{-2K/(2K+1)}$ [8], so a higher-order integrator basically performs better. Our
integrators also incorporate the geodesic integrator to avoid inner iteration. Our methods can also be
used to scalably sample from constraint distributions [17] like GMC.
There exist other SG-MCMCs on Riemann manifold, e.g. SGRLD [23] and SGRHMC [19], stochastic
gradient versions of RMLD and RMHMC respectively. But they also require the Riemann manifold to have a global coordinate system, like their original versions as is mentioned above. So
basically they cannot draw samples from hyperspheres, while our methods are capable. Technically,
SGRLD/SGRHMC (and RMLD/RMHMC) sample in the coordinate space, so we need a global one
to make it valid. The explicit use of the Riemann metric tensor also makes the methods more difficult
to implement. Our methods (and GMC) sample in the isometrically embedded space, where the
whole manifold is represented and the Riemann metric tensor is implicitly embodied by the isometric
embedding. Moreover, our integrators are of a higher order. Tab. 1 summarizes the key properties of
aforementioned D-MCMCs, where our advantages are clearly shown.
Finally, we apply our samplers to perform inference for spherical admixture models (SAM) [24].
SAM defines a hierarchical generative process to describe the data that are expressed as unit vectors
(i.e., elements on the hypersphere). The task of posterior inference is to identify a set of latent topics,
which are also unit vectors. This process is highly challenging due to a non-conjugate structure and
the strict manifold constraints. None of the existing MCMC methods is both applicable to the task
and scalable. We demonstrate that our methods are the most efficient methods to learn SAM on large
datasets, with a good performance on testing data perplexity.
2 Preliminaries
We briefly review the basics of SG-MCMCs. Consider a Bayesian model with latent variable $q$, prior $\pi_0(q)$ and likelihood $\pi(x|q)$. Given a dataset $\mathcal{D} = \{x_d\}_{d=1}^{D}$, sampling from the posterior $\pi(q|\mathcal{D})$ by D-MCMCs requires computing the gradient of the potential energy $\nabla U(q) \triangleq -\nabla \log \pi(q|\mathcal{D}) = -\nabla \log \pi_0(q) - \sum_{d=1}^{D} \nabla \log \pi(x_d|q)$, which is linear in the data size $D$ and thus not scalable. SG-MCMCs address this challenge by randomly drawing a subset $\mathcal{S}$ of $\mathcal{D}$ to build the stochastic gradient $\nabla_q \tilde U(q) \triangleq -\nabla_q \log \pi_0(q) - \frac{D}{|\mathcal{S}|} \sum_{x \in \mathcal{S}} \nabla_q \log \pi(x|q)$, a noisy but unbiased estimate. Under the i.i.d. assumption on $\mathcal{D}$, the central limit theorem holds: in the sense of convergence in distribution for large $D$,
$$\nabla_q \tilde U(q) = \nabla_q U(q) + \mathcal{N}(0, V(q)), \qquad (1)$$
where we use $\mathcal{N}(\cdot, \cdot)$ to denote a Gaussian random variable and $V(q)$ is some covariance matrix.
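As an illustration (not code from the paper), a minimal numpy sketch of this subsampled estimator; the gradient functions of the log-prior and the per-datum log-likelihood are assumed supplied by the user:

```python
import numpy as np

def stochastic_grad_U(q, data, grad_log_prior, grad_log_lik, batch_size, rng):
    """Minibatch estimate of the potential-energy gradient in Eqn. (1):
    grad U(q) ~= -grad log pi0(q) - (D/|S|) * sum_{x in S} grad log pi(x|q)."""
    D = len(data)
    idx = rng.choice(D, size=batch_size, replace=False)  # random subset S
    g = -grad_log_prior(q)
    for x in data[idx]:
        g -= (D / batch_size) * grad_log_lik(x, q)
    return g  # noisy but unbiased estimate of grad U(q)
```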
The gradient noise raises challenging restrictions to the SG-MCMC dynamics. Ma et al. [19]
then provide a recipe to construct correct dynamics. It claims that for a random variable z, given
a Hamiltonian H(z), a skew-symmetric matrix (curl matrix) Q(z) and a positive definite matrix
(diffusion matrix) D(z), the dynamics defined by the following stochastic differential equation (SDE)
$$dz = f(z)\,dt + \sqrt{2D(z)}\,dW(t) \qquad (2)$$
has the unique stationary distribution $\pi(z) \propto \exp\{-H(z)\}$, where $W(t)$ is the Wiener process and
$$f(z) = -\big[D(z) + Q(z)\big]\nabla_z H(z) + \Gamma(z), \qquad \Gamma_i(z) = \sum_j \frac{\partial}{\partial z_j}\big(D_{ij}(z) + Q_{ij}(z)\big). \qquad (3)$$
The above dynamics is compatible with stochastic gradient. For SG-MCMCs, $z$ is usually an augmentation of the target variable $q$, and the Hamiltonian usually follows the form $H(z) = T(z) + U(q)$. Referring to Eqn. (1), $\nabla_q \tilde H(z) = \nabla_q H(z) + \mathcal{N}(0, V(q))$ and $\tilde f(z) = f(z) + \mathcal{N}(0, B(z))$, where $B(z)$ is the covariance matrix of the Gaussian noise passed from $\nabla_z \tilde H(z)$ to $\tilde f(z)$ through Eqn. (3). We informally rewrite $dW(t)$ as $\mathcal{N}(0, dt)$ and express dynamics Eqn. (2) as
$$dz = f(z)\,dt + \mathcal{N}(0, 2D(z)\,dt) = f(z)\,dt + \mathcal{N}(0, B(z)\,dt^2) + \mathcal{N}\big(0,\, 2D(z)\,dt - B(z)\,dt^2\big) = \tilde f(z)\,dt + \mathcal{N}\big(0,\, 2D(z)\,dt - B(z)\,dt^2\big). \qquad (4)$$
This tells us that the same dynamics can be exactly expressed by the stochastic gradient. Moreover, the recipe is complete: for any continuous Markov process defined by Eqn. (2) with a unique stationary distribution $\pi(z) \propto \exp\{-H(z)\}$, there exists a skew-symmetric matrix $Q(z)$ so that Eqn. (3) holds.
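For instance, taking $z = q$, $H(z) = U(q)$, $D(z) = I$ and $Q(z) = 0$ in the recipe gives $f(z) = -\nabla U(q)$ and recovers SGLD. A minimal sketch of the resulting Euler-discretized update (an illustration, not code from the paper; the step size `eps` and the stochastic gradient `grad_U_tilde` are assumed supplied):

```python
import numpy as np

def sgld_step(q, grad_U_tilde, eps, rng):
    """One SGLD update: Euler discretization of dq = -grad U(q) dt + N(0, 2 dt),
    i.e. the recipe with D(z) = I and Q(z) = 0."""
    return q - eps * grad_U_tilde(q) + np.sqrt(2.0 * eps) * rng.standard_normal(q.shape)
```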
3 Stochastic Gradient Geodesic MCMC Methods
We now formally develop our SGGMC and gSGNHT. We will describe the task settings, develop the
dynamics, and show how to simulate by 2nd-order integrators and stochastic gradient.
3.1 Technical Descriptions of the Settings

We first describe a Riemann manifold. Main concepts are depicted in Fig. 1. Let $\mathcal{M}$ be an $m$-dim Riemann manifold, which is covered by a set of local coordinate systems. Denote one of them by $(\mathcal{N}, \psi)$, where $\mathcal{N} \subset \mathcal{M}$ is an open subset, and $\psi: \mathcal{N} \to \Psi$, $Q \mapsto q$, with $\Psi \triangleq \psi(\mathcal{N}) \subset \mathbb{R}^m$, $Q \in \mathcal{N}$ and $q \in \Psi$, is a homeomorphism. Additionally, transition mappings between any two intersecting local coordinate systems are required to be smooth. Denote the Riemann metric tensor under $(\mathcal{N}, \psi)$ by $G(q)$, an $m \times m$ symmetric positive-definite matrix. Another way to describe $\mathcal{M}$ is through embedding, a diffeomorphism $\iota: \mathcal{M} \to \iota(\mathcal{M}) \subset \mathbb{R}^n$ ($n \geq m$). In $(\mathcal{N}, \psi)$, $\iota$ can be embodied by a more sensible mapping $\xi \triangleq \iota \circ \psi^{-1}: \mathbb{R}^m \to \mathbb{R}^n$, $q \mapsto x$, which links the coordinate space and the embedded space. For convenience, we only consider isometric embeddings (whose existence is guaranteed [21]): $\xi$ such that $G(q)_{ij} = \sum_{l=1}^{n} \frac{\partial \xi_l(q)}{\partial q_i} \frac{\partial \xi_l(q)}{\partial q_j}$, $1 \leq i, j \leq m$, holds for any local coordinate system. Common manifolds are subsets of some $\mathbb{R}^n$, in which case the identity mapping (as $\iota$) from $\mathbb{R}^n$ (where $\mathcal{M}$ is defined) to $\mathbb{R}^n$ (the embedded space) is isometric.

[Figure 1: An illustration of manifold $\mathcal{M}$ with local coordinate system $(\mathcal{N}, \psi)$ and embedding $\iota$. See text for details.]

To define a distribution on a Riemann manifold, from which we want to sample, we need a measure. In the coordinate space $\mathbb{R}^m$, $\Psi$ naturally possesses the Lebesgue measure $\lambda^m(dq)$, and the probability density can be defined in $\Psi$, which we denote as $\pi(q)$. In the embedded space $\mathbb{R}^n$, $\iota(\mathcal{N})$ naturally possesses the Hausdorff measure $\mathcal{H}^m(dx)$, and we denote the probability density w.r.t. this measure as $\pi_H(x)$. The relation between them can be found by $\pi_H(\xi(q)) = \pi(q)/\sqrt{|G(q)|}$.
3.2 The Dynamics
We now construct our dynamics using the recipe [19] so that our dynamics naturally have the desired
stationary distribution, leading to correct samples. It is important to note that the recipe only suits dynamics in a Euclidean space. So we can only develop the dynamics in the coordinate space but not in the embedded space $\iota(\mathcal{M})$, which is generally not Euclidean. However, it is advantageous to
simulate the dynamics in the embedded space (See Sec. 3.3).
Dynamics for SGGMC  Define the momentum in the coordinate space $p \in \mathbb{R}^m$ and the augmented variable $z = (q, p) \in \mathbb{R}^{2m}$. Define the Hamiltonian² $H(z) = U(q) + \frac12 \log|G(q)| + \frac12 p^\top G(q)^{-1} p$, where $U(q) \triangleq -\log \pi(q)$. We define the Hamiltonian so that the canonical distribution $\pi(z) \propto \exp\{-H(z)\}$ marginalized w.r.t. $p$ recovers the target distribution $\pi(q)$. For a symmetric positive definite $n \times n$ matrix $C$, define the diffusion matrix $D(z)$ and the curl matrix $Q(z)$ as
$$D(z) = \begin{pmatrix} 0 & 0 \\ 0 & M(q)^\top C M(q) \end{pmatrix}, \qquad Q(z) = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix},$$
where we define $M(q)_{n \times m}$: $M(q)_{ij} = \partial \xi_i(q)/\partial q_j$. So from Eqn. (2, 3), the dynamics
$$\begin{cases} dq = G^{-1} p\, dt \\ dp = -\nabla_q U\, dt - \frac12 \nabla_q \log|G|\, dt - M^\top C M G^{-1} p\, dt - \frac12 \nabla_q \big(p^\top G^{-1} p\big)\, dt + \mathcal{N}(0,\, 2 M^\top C M\, dt) \end{cases} \qquad (5)$$
has a unique stationary distribution $\pi(z) \propto \exp\{-H(z)\}$.

² Another derivation of the momentum and the Hamiltonian, originated from physics in both coordinate and embedded spaces, is provided in Appendix C.
Dynamics for gSGNHT  Define $z = (q, p, \xi) \in \mathbb{R}^{2m+1}$, where $\xi \in \mathbb{R}$ is the thermostats. For a positive $C \in \mathbb{R}$, define the Hamiltonian $H(z) = U(q) + \frac12 \log|G(q)| + \frac12 p^\top G(q)^{-1} p + \frac{m}{2}(\xi - C)^2$, whose marginalized canonical distribution is $\pi(q)$ as desired. Define $D(z)$ and $Q(z)$ as
$$D(z) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & C G(q) & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad Q(z) = \begin{pmatrix} 0 & -I & 0 \\ I & 0 & p/m \\ 0 & -p^\top/m & 0 \end{pmatrix}.$$
Then by Eqn. (2, 3) the proper dynamics of gSGNHT is
$$\begin{cases} dq = G^{-1} p\, dt \\ dp = -\nabla_q U\, dt - \frac12 \nabla_q \log|G|\, dt - \xi p\, dt - \frac12 \nabla_q \big(p^\top G^{-1} p\big)\, dt + \mathcal{N}(0,\, 2 C G\, dt) \\ d\xi = \big(\frac{1}{m} p^\top G^{-1} p - 1\big)\, dt \end{cases} \qquad (6)$$
These two dynamics are novel. They are extensions of the dynamics of SGHMC and SGNHT to
Riemann manifolds, respectively. Conceiving the dynamics in this form is also intended for the
convenience to develop 2nd-order geodesic integrators, which differs from SGRHMC.
3.3 Simulation with 2nd-order Geodesic Integrators
In this part we develop our integrators by following the symmetric splitting integrator (SSI) scheme [8],
which is guaranteed to be of 2nd-order. The idea of SSI is to first split the dynamics into parts with
each analytically solvable, then alternately simulate each exactly with the analytic solutions. Although
also SSI, the integrator of GMC does not fit our dynamics where diffusion arises. But we adopt its
embedding technique to get rid of any local coordinate system and thus release the global coordinate system requirement. So we will solve and simulate the split dynamics in the isometrically embedded space, where everything is expressed by the position $x = \xi(q)$ and the velocity $v = \dot x$ (which is actually the momentum in the isometrically embedded space, see Appendix C; the overhead dot means time derivative), instead of $q$ and $p$.
Integrator for SGGMC  We first split dynamics (5) into sub-SDEs with each analytically solvable:
$$A: \begin{cases} dq = G^{-1} p\, dt \\ dp = -\frac12 \nabla_q \big(p^\top G^{-1} p\big)\, dt \end{cases} \quad B: \begin{cases} dq = 0 \\ dp = -M^\top C M G^{-1} p\, dt \end{cases} \quad O: \begin{cases} dq = 0 \\ dp = -\nabla_q U(q)\, dt - \frac12 \nabla_q \log|G(q)|\, dt + \mathcal{N}(0,\, 2 M^\top C M\, dt) \end{cases}.$$
As noted in GMC, the solution of dynamics A is the geodesic flow of the manifold [1]. Intuitively, dynamics A describes motion with no force, so a particle moves freely on the manifold, e.g. the uniform motion in Euclidean space, and motion along great circles (velocity rotating with varying tangents along the trajectory) on the hypersphere $S^{d-1} \triangleq \{x \in \mathbb{R}^d \mid \|x\| = 1\}$ ($\|\cdot\|$ denotes the $\ell_2$-norm). The evolution of the position and velocity of this kind is the geodesic flow. We require an explicit form of the geodesic flow in the embedded space. For $S^{d-1}$,
$$\begin{cases} x(t) = x(0)\cos(\alpha t) + \frac{v(0)}{\alpha}\sin(\alpha t) \\ v(t) = -\alpha\, x(0)\sin(\alpha t) + v(0)\cos(\alpha t) \end{cases} \qquad (7)$$
is the geodesic flow expressed by the embedded variables $x$ and $v$, where $\alpha = \|v(0)\|$.
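A minimal numpy sketch of this geodesic flow (an illustration under the stated setting, not code from the paper); `x` is assumed to be a unit vector and `v` a tangent vector at `x`:

```python
import numpy as np

def sphere_geodesic_flow(x, v, t):
    """Exact geodesic flow (7) on S^{d-1}: motion along a great circle."""
    alpha = np.linalg.norm(v)
    if alpha == 0.0:                      # no motion if the velocity is zero
        return x.copy(), v.copy()
    x_t = x * np.cos(alpha * t) + (v / alpha) * np.sin(alpha * t)
    v_t = -alpha * x * np.sin(alpha * t) + v * np.cos(alpha * t)
    return x_t, v_t
```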
By details in [7] or Appendix A, dynamics B and O are solved as
$$B: \begin{cases} x(t) = x(0) \\ v(t) = \operatorname{expm}\{-\Pi(x(0))\, C\, t\}\, v(0) \end{cases} \qquad O: \begin{cases} x(t) = x(0) \\ v(t) = v(0) + \Pi(x(0))\big[-\nabla_x U_H(x(0))\, t + \mathcal{N}(0, 2Ct)\big] \end{cases}$$
where $U_H(x) \triangleq -\log \pi_H(x)$, $\operatorname{expm}\{\cdot\}$ is the matrix exponent, and $\Pi(x)$ is the projection onto the tangent space at $x$ in the embedded manifold. For $\mathbb{R}^n$, $\Pi(x) = I_n$ (the identity mapping in $\mathbb{R}^n$) and for $S^{n-1}$ embedded in $\mathbb{R}^n$, $\Pi(x) = I_n - x x^\top$ (see Appendix A.3).
We further reduce dynamics B for scalar $C$: $v(t) = \Pi(x(0))\exp\{-Ct\}\,v(0) = \exp\{-Ct\}\,v(0)$, by noting that $\exp\{-Ct\}$ is a scalar and $v(0)$ already lies on the tangent space at $x(0)$. To illustrate this form, we expand the exponent for small $t$ and get $v(t) = (1 - Ct)\,v(0)$, which is exactly the action of a friction dissipating energy to control injected noise, as proposed in SGHMC. Our investigation reveals that this form holds generally for $v$ as the momentum in the isometrically embedded space, but not the usual momentum $p$ in the coordinate space. In SGHMC, $v$ and $p$ are indistinguishable, but in our case $v$ can only lie in the tangent space and $p$ is arbitrary in $\mathbb{R}^m$.
Integrator for gSGNHT  We split dynamics (6) in a similar way:
$$A: \begin{cases} dq = G^{-1} p\, dt \\ dp = -\frac12 \nabla_q \big(p^\top G^{-1} p\big)\, dt \\ d\xi = \big(\frac1m p^\top G^{-1} p - 1\big)\, dt \end{cases} \quad B: \begin{cases} dq = 0 \\ dp = -\xi p\, dt \\ d\xi = 0 \end{cases} \quad O: \begin{cases} dq = 0 \\ dp = -\nabla_q U\, dt - \frac12 \nabla_q \log|G|\, dt + \mathcal{N}(0,\, 2 C G\, dt) \\ d\xi = 0 \end{cases}.$$
For dynamics A, the solution of $q$ and $p$ is again the geodesic flow. To solve $\xi$, we first figure out that for dynamics A, $p^\top G^{-1} p$ is constant: $\frac{d}{dt}\big(p^\top G(q)^{-1} p\big) = \nabla_q\big(p^\top G(q)^{-1} p\big)^\top \dot q + 2\big(G(q)^{-1} p\big)^\top \dot p = -2\dot p^\top \dot q + 2\dot q^\top \dot p = 0$. Alternatively we note that $\frac12 p^\top G^{-1} p = \frac12 v^\top v$ is the kinetic energy³ conserved by motion with no force. Now the evolution of $\xi$ can be solved as $\xi(t) = \xi(0) + \big(\frac1m v(0)^\top v(0) - 1\big)t$. Dynamics O is identical to the one of SGGMC. Dynamics B can be solved similarly with only $v$ updated: $v(t) = \exp\{-\xi(0)t\}\, v(0)$. Expansion of this recovers the dissipation of energy by an adaptive friction proposed by SGNHT, and we extend it to an embedded space.
Now we consider incorporating stochastic gradient. Only the common dynamics O is affected. Similar to Eqn. (1), we express the stochastic gradient as $\nabla_x \tilde U_H(x) = \nabla_x U_H(x) + \mathcal{N}(0, V(x))$, then reformulate the solution of dynamics O as
$$v(t) = v(0) + \Pi(x(0))\Big[-\nabla_x \tilde U_H(x(0))\, t + \mathcal{N}\big(0,\, 2Ct - V(x(0))\, t^2\big)\Big]. \qquad (8)$$
To estimate the usually unknown $V(x)$, a simple way is just to take it as zero, in the sense that $V(x)t^2$ is a higher order infinitesimal of $2Ct$ for $t$ as a small simulation step size. Another way to estimate $V(x)$ is by the empirical Fisher information, as is done in [2].

Finally, as SSI suggests, we simulate the complete dynamics by exactly simulating these solutions alternately in an "ABOBA" pattern. For a time step size of $\epsilon$, dynamics A and B advance by $\epsilon/2$ for once and dynamics O by $\epsilon$. As other SG-MCMCs, we omit the unscalable Metropolis-Hastings test. But the consistency is still guaranteed [8] of e.g. the estimation by averaging over samples drawn from SG-MCMCs. Algorithms of SGGMC and gSGNHT are listed in Appendix E.
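To make the scheme concrete, a minimal numpy sketch of one ABOBA step of SGGMC on the hypersphere, assuming a scalar friction $C$, taking $V(x) = 0$ in the O-step as suggested above, and with a user-supplied stochastic gradient of $U_H$ (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def project(x, u):
    """Tangent-space projection on the sphere: Pi(x) u = (I - x x^T) u."""
    return u - x * (x @ u)

def sggmc_step_sphere(x, v, grad_U_H_tilde, eps, C, rng):
    """One 'ABOBA' step of SGGMC on S^{d-1} with scalar friction C."""
    def flow(x, v, t):  # dynamics A: geodesic flow, Eqn. (7)
        a = np.linalg.norm(v)
        if a == 0.0:
            return x, v
        return (x * np.cos(a * t) + v / a * np.sin(a * t),
                -a * x * np.sin(a * t) + v * np.cos(a * t))
    x, v = flow(x, v, eps / 2)                  # A, half step
    v = np.exp(-C * eps / 2) * v                # B, half step (friction)
    noise = np.sqrt(2.0 * C * eps) * rng.standard_normal(x.shape)
    v = v + project(x, -grad_U_H_tilde(x) * eps + noise)  # O, full step, Eqn. (8)
    v = np.exp(-C * eps / 2) * v                # B, half step
    x, v = flow(x, v, eps / 2)                  # A, half step
    return x, v
```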
4 Application to Spherical Admixture Model
We now apply SGGMC/gSGNHT to solve the challenging task of posterior inference in Spherical
Admixture Model (SAM) [24]. SAM is a Bayesian topic model for spherical data (each datum is in
some $S^{d-1}$), such as the tf-idf representation of text data. It enables more feature representations for hierarchical Bayesian models, and has the benefit over Latent Dirichlet Allocation (LDA) [5] of directly modeling the absence of words. The structure of SAM is shown in Fig. 2. Each document $v_d$, each topic $\beta_k$, the corpus mean $\mu$ and the hyper-parameter $m$ are all in $S^{V-1}$ with $V$ the vocabulary size. Each topic proportion $\theta_d$ is in the $(K-1)$-dim simplex with $K$ the number of topics.
³ $p^\top G^{-1} p = (G^{-1} p)^\top G (G^{-1} p) = \dot q^\top (M^\top M)\, \dot q = (M \dot q)^\top (M \dot q) = v^\top v$ for an isometric embedding.
SAM uses the von Mises-Fisher distribution (vMF) (see e.g. [20]) to model variables on hyperspheres. The vMF on $S^{d-1}$ with mean $\mu \in S^{d-1}$ and concentration parameter $\kappa \in \mathbb{R}_+$ has pdf (w.r.t. the Hausdorff measure) $\mathrm{vMF}(x|\mu, \kappa) = c_d(\kappa)\exp\{\kappa \mu^\top x\}$, where $c_d(\kappa) = \kappa^{d/2-1} / \big((2\pi)^{d/2} I_{d/2-1}(\kappa)\big)$ and $I_r(\cdot)$ denotes the modified Bessel function of the first kind and order $r$. Then the generating process of SAM is
- Draw $\mu \sim \mathrm{vMF}(\mu|m, \kappa_0)$;
- For $k = 1, \dots, K$, draw topic $\beta_k \sim \mathrm{vMF}(\beta_k|\mu, \sigma)$;
- For $d = 1, \dots, D$, draw $\theta_d \sim \mathrm{Dir}(\theta_d|\alpha)$ and $v_d \sim \mathrm{vMF}(v_d|\bar v(\beta, \theta_d), \kappa)$,
where $\bar v(\beta, \theta_d) \triangleq \frac{\beta \theta_d}{\|\beta \theta_d\|}$ with $\beta \triangleq (\beta_1, \dots, \beta_K)$ is an approximate spherical weighted mean of topics. The joint distribution of $v \triangleq (v_1, \dots, v_D)$, $\mu$, $\beta$, $\theta$ can be known.

[Figure 2: An illustration of SAM model structure.]
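A minimal sketch of the vMF log-density using SciPy's exponentially scaled Bessel function for numerical stability (an illustration, not code from the paper):

```python
import numpy as np
from scipy.special import ive  # ive(r, k) = exp(-k) * I_r(k), avoids overflow

def vmf_log_pdf(x, mu, kappa):
    """log vMF(x | mu, kappa) on S^{d-1} w.r.t. the Hausdorff measure:
    log c_d(kappa) + kappa * mu^T x, where
    c_d(kappa) = kappa^{d/2-1} / ((2 pi)^{d/2} I_{d/2-1}(kappa))."""
    d = x.shape[-1]
    r = d / 2.0 - 1.0
    log_Ir = np.log(ive(r, kappa)) + kappa   # log I_r(kappa)
    log_cd = r * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_Ir
    return log_cd + kappa * (mu @ x)
```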
The inference task is to estimate the topic posterior $\pi(\beta|v)$. As it is intractable, [24] provides a mean-field variational inference method and solves an optimization problem under the spherical constraint, which is tackled by repeatedly normalizing. However, this treatment is not applicable to most sampling methods since it may corrupt the distribution of the samples. [24] tries a simple adaptive Metropolis-Hastings sampler with undesirable results, and no more attempts of sampling methods appear. Due to the lack of a global coordinate system on the hypersphere, most Riemann manifold samplers, including SGRLD and SGRHMC, fail. To our knowledge, only CHMC and GMC are suitable, yet they are not scalable. Our samplers are appropriate for the task, with the advantage of scalability.
Now we present our inference method that uses SGGMC/gSGNHT to directly sample from $\pi(\beta|v)$. First we note that $\mu$ can be collapsed analytically and the marginalized distribution of $(v, \beta, \theta)$ is:
$$\pi(v, \beta, \theta) = c_V(\kappa_0)\, c_V(\sigma)^K\, c_V(\|\bar m(\beta)\|)^{-1} \prod_{d=1}^{D} \mathrm{Dir}(\theta_d|\alpha)\, \mathrm{vMF}(v_d|\bar v(\beta, \theta_d), \kappa), \qquad (9)$$
where $\bar m(\beta) \triangleq \kappa_0 m + \sigma \sum_{k=1}^{K} \beta_k$. To sample from $\pi(\beta|v)$ using our samplers, we only need to know a stochastic estimate of the gradient of the potential energy $\nabla_\beta U(\beta|v) \triangleq -\nabla_\beta \log \pi(\beta|v)$, which can be estimated by adopting the technique used in [11]:
$$\nabla_\beta \log \pi(\beta|v) = \frac{1}{\pi(\beta|v)} \int \nabla_\beta \pi(\beta, \theta|v)\, d\theta = \int \frac{\pi(\beta, \theta|v)}{\pi(\beta|v)} \frac{\nabla_\beta \pi(\beta, \theta|v)}{\pi(\beta, \theta|v)}\, d\theta = \mathbb{E}_{\pi(\theta|\beta, v)}\big[\nabla_\beta \log \pi(\beta, \theta|v)\big],$$
where $\nabla_\beta \log \pi(\beta, \theta|v) = \nabla_\beta \log \pi(v, \beta, \theta)$ is known, and the expectation can be estimated by averaging over a set of samples $\{\theta^{(n)}\}_{n=1}^{N}$ from $\pi(\theta|\beta, v)$: $\nabla_\beta U(\beta|v) \approx -\frac{1}{N}\sum_{n=1}^{N} \nabla_\beta \log \pi(v, \beta, \theta^{(n)})$. To draw $\{\theta^{(n)}\}_{n=1}^{N}$, noting the simplex constraint and that the target distribution $\pi(\theta|\beta, v)$ is known up to a constant multiplier, we use GMC to do the task.

To scale up, we use a subset $\{d(s)\}_{s=1}^{S}$ of indices of randomly chosen items from the whole data set to get a stochastic estimate for each $\nabla_\beta \log \pi(v, \beta, \theta^{(n)})$. The final stochastic gradient is:
$$\nabla_\beta \tilde U(\beta|v) \approx -\nabla_\beta \log c_V(\|\bar m(\beta)\|) - \kappa \frac{D}{NS} \sum_{n=1}^{N} \sum_{s=1}^{S} \nabla_\beta\, v_{d(s)}^\top \bar v\big(\beta, \theta^{(n)}_{d(s)}\big). \qquad (10)$$
The inference algorithm for SAM by SGGMC/gSGNHT is summarized in Alg. 3 in Appendix E.
5 Experiments
We present empirical results on both synthetic and real datasets to prove the accuracy and efficiency
of our methods. All target densities are expressed in the embedded space w.r.t. the Hausdorff measure, so we omit the subscript "H". Synthetic experiments are only for SGGMC since the advantage of using thermostats has been shown by [10] and the effectiveness of gSGNHT is presented on real datasets.
Detailed settings of the experiments are provided in Appendix F.
5.1 Toy Experiment
[Figure 4: (a-b) True and empirical densities for $\pi(v_1|\mathcal{D})$ and $\pi(v_2|\mathcal{D})$. (c) True (left) and empirical by SGGMC (right) densities for $\pi(v_1, v_2|\mathcal{D})$.]

We first present the utility and check the correctness of SGGMC by a greenhouse experiment with known stochastic gradient noise. Consider sampling from a circle ($S^1$) for easy visualization. We set the target distribution such that the potential energy is $U(x) = -\log\big(\exp\{5\mu_1^\top x\} + 2\exp\{5\mu_2^\top x\}\big)$, where $x, \mu_1, \mu_2 \in S^1$ and $\mu_1 = -\mu_2 = \frac{\pi}{3}$ (angle from the $+x$ direction). The stochastic gradient is
produced by corrupting with N (0, 1000I), whose variance is used as V (x) in Eqn. (8) for sampling.
Fig. 3(a) shows 100 samples from SGGMC and empirical distribution of 10,000 samples in the
embedded space R2 . True and empirical distributions are compared in Fig. 3(b) in angle space (local
coordinate space). We see no obvious corruption of the result when using stochastic gradient.
[Figure 3: Toy experiment results: (a) samples and empirical distribution of SGGMC in the embedded space; (b) comparison of the true and empirical distributions in angle space.]

It should be stressed that although it is possible to apply scalable methods like SGRLD in spherical coordinate systems (almost global ones), it is too troublesome to work out the form of e.g. the Riemann metric tensor, and special treatments like reflection at boundaries have to be considered. Numerical instability at boundaries also tends to appear. All these will get even worse in higher dimensions. Our methods work in embedded spaces, so all these issues are bypassed and can be elegantly extended to high dimensions.

5.2 Synthetic Experiment
We then test SGGMC on a simple Bayesian posterior estimation task. We adopt a model with a similar structure as the one used in [29]. Consider a mixture model of two vMFs on $S^1$ with equal weights:
$$\pi(v_1) = \mathrm{vMF}(v_1|e_1, \kappa_1), \quad \pi(v_2) = \mathrm{vMF}(v_2|e_1, \kappa_2), \quad \pi(x_i|v_1, v_2) \propto \mathrm{vMF}(x_i|v_1, \kappa_x) + \mathrm{vMF}(x_i|\bar v, \kappa_x),$$
where $e_1 = (1, 0)$ and $\bar v \triangleq (v_1 + v_2)/\|v_1 + v_2\|$. The task is to infer the posterior $\pi(v_1, v_2|\mathcal{D})$, where $\mathcal{D} = \{x_i\}_{i=1}^{D=100}$ is our synthetic data that is generated from the likelihood with $v_1 = \pi/24$, $v_2 = \pi/8$ and $\kappa_1 = \kappa_2 = \kappa_x = 20$ by GMC. SGGMC uses empirical Fisher information in the way of [2] for $V(x)$ in Eqn. (8), and uses 10 for the batch size. Fig. 4(a-b) show the true and empirical marginal posteriors of $v_1$ and $v_2$, and Fig. 4(c) presents the empirical joint posterior by samples from SGGMC and its true density. We see that samples from SGGMC exhibit no observable corruption when a mini-batch is used, and fully explore the two modes and the strong correlation of $v_1$ and $v_2$.⁴
5.3 Spherical Admixture Models
Setups For baselines, we compare with the mean-field variational inference (VI) by [24] and its
stochastic version (StoVI) based on [15], as well as GMC methods. It is problematic for GMC to
directly sample from the target distribution $\pi(\beta|v)$ since the potential energy is hard to estimate, which
is required for Metropolis-Hastings (MH) test in GMC. An approximate Monte Carlo estimation is
provided in Appendix B and the corresponding method for SAM is GMC-apprMH. An alternative
is GMC-bGibbs, which adopts blockwise Gibbs sampling to alternately sample from ?(?|?, v) and
?(?|?, v) (both known up to a constant multiplier) using GMC.
We evaluate the methods by log-perplexity, the average negative log-likelihood on a held-out test set $\mathcal{D}_{test}$. Variational methods produce a single point estimate $\bar\beta$ and the log-perplexity is $\text{log-perp} = -\frac{1}{|\mathcal{D}_{test}|}\sum_{d \in \mathcal{D}_{test}} \log \pi(v_d|\bar\beta)$. Sampling methods draw a set of samples $\{\beta^{(m)}\}_{m=1}^{M}$ and $\text{log-perp} = -\frac{1}{|\mathcal{D}_{test}|}\sum_{d \in \mathcal{D}_{test}} \log\big(\frac{1}{M}\sum_{m=1}^{M} \pi(v_d|\beta^{(m)})\big)$. In both cases the intractable $\pi(v_d|\beta)$ needs to be estimated. By noting that $\pi(v_d|\beta) = \int \pi(v_d, \theta_d|\beta)\, d\theta_d = \mathbb{E}_{\pi(\theta_d|\beta)}[\pi(v_d|\beta, \theta_d)]$, we estimate it by averaging $\pi(v_d|\beta, \theta_d^{(n)})$ (exactly known from the generating process) over samples $\{\theta_d^{(n)}\}_{n=1}^{N}$ drawn from $\pi(\theta_d|\beta) = \pi(\theta_d) = \mathrm{Dir}(\alpha)$, the prior of $\theta_d$. The log-perplexity is not comparable among different models so we exclude LDA from our baseline.

⁴ Appendix D provides a rationale on the shape of the joint posterior.
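A minimal sketch of this Monte Carlo log-perplexity estimate (an illustration, not the authors' code; `log_vmf` is a vMF log-density such as the sketch in Section 4, and a real implementation would work in log-space throughout):

```python
import numpy as np

def log_perplexity(test_docs, beta_samples, alpha, kappa, log_vmf, n_theta, rng):
    """Estimate log-perp: pi(v_d | beta) is averaged over theta ~ Dir(alpha),
    then over posterior samples of beta."""
    total = 0.0
    for v in test_docs:                        # v: unit vector in R^V
        lik = 0.0
        for beta in beta_samples:              # beta: V x K topic matrix
            thetas = rng.dirichlet(alpha, size=n_theta)
            vals = []
            for th in thetas:
                v_bar = beta @ th
                v_bar /= np.linalg.norm(v_bar)          # spherical weighted mean
                vals.append(np.exp(log_vmf(v, v_bar, kappa)))
            lik += np.mean(vals)
        total += np.log(lik / len(beta_samples))
    return -total / len(test_docs)
```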
We show the performance of all methods on a small and a large dataset. Hyper-parameters of
SAM are fixed while training and set the same for all methods. $V(x)$ in Eqn. (8) is taken to be zero
for SGGMC/gSGNHT. All sampling methods are implemented⁵ in C++ and fairly parallelized
by OpenMP. VI/StoVI are run in MATLAB codes by [24] and we only use their final scores for
comparison. Appendix F gives further implementation details, including techniques to avoid overflow.
On the small dataset  The small dataset is the 20News-different dataset used by [24], which consists of 3 categories from the 20Newsgroups dataset. It is small (1,666 training and 1,107 test documents) so we have the chance to see the eventual results of all methods. We use 20 topics and 50 as the batch size.

[Figure 5: Evolution of log-perplexity along wall time (in seconds, log scale) of all methods on (a) the 20News-different dataset and (b) the 150K Wikipedia subset. Methods compared: VI, StoVI, GMC-apprMH, GMC-bGibbs, SGGMC-batch, SGGMC-full, gSGNHT-batch, gSGNHT-full.]

Fig. 5(a) shows the performance of all methods. We can see that our SGGMC and gSGNHT perform better than others. VI converges swiftly but cannot
go any lower due to the intrinsic gap between the mean-field variational distribution and the true
posterior. StoVI converges slower than VI in this small scale case, and exhibits the same limit.
All sampling methods eventually go below variational methods, and ours go the lowest. gSGNHT
shows its benefit to outperform SGGMC under the same setting. For our methods, an appropriately
smaller batch size achieves a better result due to the speed-up by subsampling. Note that even the
full-batch SGGMC and gSGNHT outperform GMC variants. This may be because the randomness in the dynamics helps in jumping out of one local mode to another for a better exploration.
On the large dataset For the large dataset, we use a subset of the Wikipedia dataset with 150K
training and 1K test documents, to challenge the scalability of all the methods. We use 50 topics and
100 as the batch size. Fig. 5(b) shows the outcome. We see that the gap between our methods and
other baselines gets larger, indicating our scalability. Bounded curves of VI/StoVI, the advantage of
using thermostats and subsampling speed-up appear again. Our full-batch versions are still better than
GMC variants. GMC-apprMH and GMC-bGibbs scale badly; they converge slowly in this case.
6 Conclusions and Discussions
We propose SGGMC and gSGNHT, SG-MCMCs for scalable sampling from manifolds with known
geodesic flow. They are saliently efficient in their applications. Novel dynamics are constructed and 2nd-order geodesic integrators are developed. We apply the methods to the SAM topic model for more accurate and scalable inference. Synthetic experiments verify the validity, and experiments for SAM on real-world data show an obvious advantage in accuracy over variational inference methods and in scalability over other applicable sampling methods. There remain broad possible applications
of our methods, including models involving vMF (e.g. mixture of vMF [4, 14, 28], DP mixture of
vMF [12, 3, 27]), constraint distributions [17] (e.g. truncated Gaussian), and distributions on Stiefel
manifold (e.g. Bayesian matrix completion [25]), where the ability of scale-up will be appealing.
Acknowledgments
The work was supported by the National Basic Research Program (973 Program) of China (No.
2013CB329403), National NSF of China Projects (Nos. 61620106010, 61322308, 61332007), the
Youth Top-notch Talent Support Program, and Tsinghua Initiative Scientific Research Program (No.
20141080934).
⁵ All the codes and data can be found at http://ml.cs.tsinghua.edu.cn/~changliu/sggmcmc-sam/.
References
[1] Ralph Abraham, Jerrold E. Marsden, and Jerrold E. Marsden. Foundations of mechanics. Benjamin/Cummings Publishing Company, Reading, Massachusetts, 1978.
[2] Sungjin Ahn, Anoop Korattikara, and Max Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. arXiv preprint arXiv:1206.6380, 2012.
[3] Nguyen Kim Anh, Nguyen The Tam, and Ngo Van Linh. Document clustering using Dirichlet process mixture model of von Mises-Fisher distributions. In The 4th International Symposium on Information and Communication Technology, SoICT 2013, pages 131-138, 2013.
[4] Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, and Suvrit Sra. Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Machine Learning Research, 6:1345-1382, 2005.
[5] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022, 2003.
[6] Marcus A. Brubaker, Mathieu Salzmann, and Raquel Urtasun. A family of MCMC methods on implicitly defined manifolds. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 161-172, 2012.
[7] Simon Byrne and Mark Girolami. Geodesic Monte Carlo on embedded manifolds. Scandinavian Journal of Statistics, 40(4):825-845, 2013.
[8] Changyou Chen, Nan Ding, and Lawrence Carin. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In Advances in Neural Information Processing Systems, pages 2269-2277, 2015.
[9] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1683-1691, 2014.
[10] Nan Ding, Youhan Fang, Ryan Babbush, Changyou Chen, Robert D. Skeel, and Hartmut Neven. Bayesian sampling using stochastic gradient thermostats. In Advances in Neural Information Processing Systems, pages 3203-3211, 2014.
[11] Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic MCMC. arXiv preprint arXiv:1506.04557, 2015.
[12] Kaushik Ghosh, Rao Jammalamadaka, and Ram C. Tiwari. Semiparametric Bayesian techniques for problems in circular data. Journal of Applied Statistics, 30(2):145-161, 2003.
[13] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123-214, 2011.
[14] Siddharth Gopal and Yiming Yang. Von Mises-Fisher clustering models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014.
[15] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[16] I. M. James. The topology of Stiefel manifolds, volume 24. Cambridge University Press, 1976.
[17] Shiwei Lan, Bo Zhou, and Babak Shahbaba. Spherical Hamiltonian Monte Carlo for constrained target distributions. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 629-637, 2014.
[18] Chunyuan Li, Changyou Chen, Kai Fan, and Lawrence Carin. High-order stochastic gradient thermostats for Bayesian learning of deep models. arXiv preprint arXiv:1512.07662, 2015.
[19] Yi-An Ma, Tianqi Chen, and Emily Fox. A complete recipe for stochastic gradient MCMC. In Advances in Neural Information Processing Systems, pages 2899-2907, 2015.
[20] Kanti V. Mardia and Peter E. Jupp. Distributions on spheres. Directional Statistics, pages 159-192, 2000.
[21] John Nash. The imbedding problem for Riemannian manifolds. Annals of Mathematics, pages 20-63, 1956.
[22] Radford M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.
[23] Sam Patterson and Yee Whye Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems, pages 3102-3110, 2013.
[24] Joseph Reisinger, Austin Waters, Bryan Silverthorn, and Raymond J. Mooney. Spherical topic models. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 903-910, 2010.
[25] Yang Song and Jun Zhu. Bayesian matrix completion via adaptive relaxed spectral regularization. In The 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
[26] Eduard L. Stiefel. Richtungsfelder und Fernparallelismus in n-dimensionalen Mannigfaltigkeiten. Commentarii Mathematici Helvetici, 8(1):305-353, 1935.
[27] Julian Straub, Jason Chang, Oren Freifeld, and John W. Fisher III. A Dirichlet process mixture model for spherical data. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 930-938, 2015.
[28] Jalil Taghia, Zhanyu Ma, and Arne Leijon. Bayesian estimation of the von Mises-Fisher mixture model with variational inference. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(9):1701-1715, 2014.
[29] Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681-688, 2011.
Deconvolving Feedback Loops in Recommender Systems
Ayan Sinha
Purdue University
[email protected]
David F. Gleich
Purdue University
[email protected]
Karthik Ramani
Purdue University
[email protected]
Abstract
Collaborative filtering is a popular technique to infer users? preferences on new
content based on the collective information of all users preferences. Recommender
systems then use this information to make personalized suggestions to users. When
users accept these recommendations it creates a feedback loop in the recommender
system, and these loops iteratively influence the collaborative filtering algorithm?s
predictions over time. We investigate whether it is possible to identify items
affected by these feedback loops. We state sufficient assumptions to deconvolve
the feedback loops while keeping the inverse solution tractable. We furthermore
develop a metric to unravel the recommender system's influence on the entire
user-item rating matrix. We use this metric on synthetic and real-world datasets
to (1) identify the extent to which the recommender system affects the final rating
matrix, (2) rank frequently recommended items, and (3) distinguish whether a
user?s rated item was recommended or an intrinsic preference. Our results indicate
that it is possible to recover the ratings matrix of intrinsic user preferences using a
single snapshot of the ratings matrix without any temporal information.
1 Introduction
Recommender systems have been helpful to users for making decisions in diverse domains such
as movies, wines, food, news among others [19, 23]. However, it is well known that the interface
of these systems affects the users' opinion, and hence, their ratings of items [7, 24]. Thus, broadly
speaking, a user?s rating of an item is either his or her intrinsic preference or the influence of the
recommender system (RS) on the user [2]. As these ratings implicitly affect recommendations to other
users through feedback, it is critical to quantify the role of feedback in content personalization [22].
Thus the primary motivating question for this paper is: Given only a user-item rating matrix, is it
possible to infer whether any preference values are influenced by a RS? Secondary questions include:
Which preference values are influenced and to what extent by the RS? Furthermore, how do we
recover the true preference value of an item to a user?
We develop an algorithm to answer these questions using the singular value decomposition (SVD)
of the observed ratings matrix (Section 2). The genesis of this algorithm follows by viewing the
observed ratings at any point of time as union of true ratings and recommendations:
$$R_{obs} = R_{true} + R_{recom} \qquad (1)$$
where Robs is the observed rating matrix at a given instant of time, Rtrue is the rating matrix due
to users' true preferences of items (along with any external influences such as ads, friends, and so
on) and Rrecom is the rating matrix which indicates the RS's contribution to the observed ratings.
Our more formal goal is to recover Rtrue from Robs . But this is impossible without strong modeling
assumptions; any rating is just as likely to be a true rating as due to the system.
Thus, we make strong, but plausible assumptions about a RS. In essence, these assumptions prescribe
a precise model of the recommender and prevent its effects from completely dominating the future.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
With these assumptions, we are able to mathematically relate Rtrue and Robs . This enables us to
find the centered rating matrix Rtrue (up to scaling). We caution readers that these assumptions
are designed to create a model that we can tractably analyze, and they should not be considered
limitations of our ideas. Indeed, the strength of this simplistic model is that we can use its insights
and predictions to analyze far more complex real-world data. One example of this model is that
the notion of Rtrue is a convenient fiction that represents some idealized, unperturbed version of the
ratings matrix. Our model and theory suggests that Rtrue ought to have some relationship with the
observed ratings, Robs . By studying these relationships, we will show that we gain useful insights
into the strength of various feedback and recommendation processes in real-data.
In that light, we use our theory to develop a heuristic, but accurate, metric to quantitatively infer the
influence of a RS (or any set of feedback effects) on a ratings matrix (Section 3). Additionally, we
propose a metric for evaluating the influence of a recommender system on each user-item rating pair.
Aggregating these scores over all users helps identify putative highly recommended items. The final
metrics for a RS provide insight into the quality of recommendations and argue that Netflix had a
better recommender than MovieLens, for example. This score is also sensitive to all cases where we
have ground-truth knowledge about feedback processes akin to recommenders in the data.
2 Deconvolving feedback
We first state equations and assumptions under which the true rating matrix is recoverable (or deconvolvable) from the observed matrix, and provide an algorithm to deconvolve using the SVD.
2.1 A model recommender system
Consider a ratings matrix $R$ of dimension $m \times n$ where $m$ is the number of users and $n$ is the number of items being rated. Users are denoted by subscript $u$, and items are denoted by subscript $i$, i.e., $R_{u,i}$ denotes user $u$'s rating for item $i$. As stated after equation (1), our objective is to decouple $R_{true}$ from $R_{recom}$ given the matrix $R_{obs}$. Although this problem seems intractable, we list a series of assumptions under which a closed form solution of $R_{true}$ is deconvolvable from $R_{obs}$ alone.

[Figure 1: Subfigure A shows a ratings matrix with recommender-induced ratings and true ratings; Subfigure B: the feedback loop in a RS, wherein the observed ratings are a function of the true ratings and the ratings induced by the RS.]
Assumption 1  The feedback in the RS occurs through the iterative process involving the observed ratings and an item-item similarity matrix $S$:¹
$$R_{obs} = R_{true} + H \circ (R_{obs} S). \qquad (2)$$
Here $\circ$ indicates the Hadamard, or entrywise, product, given as $(H \circ R)_{u,i} = H_{u,i} \cdot R_{u,i}$. This assumption is justified because in many collaborative filtering techniques, $R_{recom}$ is a function of the observed ratings $R_{obs}$ and the item-item similarity matrix $S$. The matrix $H$ is an indicator matrix over a set of items where the user followed the recommendation and agreed with it. This matrix is essentially completely unknown and is essentially unknowable without direct human interviews. The model RS equation (2) then iteratively updates $R_{obs}$ based on commonly rated items by users. This key idea is illustrated in Figure 1. The recursion progressively fills all missing entries in the matrix $R_{obs}$ starting from $R_{true}$. The recursions do not update $R_{true}$ in our model of a RS. If we were to explicitly consider the state of the matrix $R_{obs}$ after $k$ iterations, $R^{k+1}_{obs}$, we get:
$$R^{k+1}_{obs} = R_{true} + H^{(k)} \circ \big(R^{k}_{obs} S_k\big) = R_{true} + H^{(k)} \circ \Big(\big(R_{true} + H^{(k-1)} \circ (R^{k-1}_{obs} S_{k-1})\big) S_k\Big) = \dots \qquad (3)$$
Here $S_k$ is the item-item similarity matrix induced by the observed matrix at state $k$. The above equation (3) is naturally initialized as $R^1_{obs} = R_{true}$ along with the constraint $S_1 = S_{true}$, i.e., the similarity matrix at the first iteration is the similarity matrix induced by the matrix of true preferences, $R_{true}$. Thus, we see that $R_{obs}$ is an implicit function of $R_{true}$ and the set of similarity matrices $S_k, S_{k-1}, \dots, S_1$.

¹ For user-user similarities $\bar S$, the derivations in this paper can be extended by considering the expression $R^\top_{obs} = R^\top_{true} + \bar H \circ (R^\top_{obs} \bar S)$. We restrict to item-item similarity, which is more popular in practice.
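To make the model concrete, a minimal numpy sketch of this recursion, with the unknowable indicator matrices $H^{(k)}$ drawn as Bernoulli masks (an illustrative simulation in the spirit of Assumption 2 below, not the authors' code; the `similarity` function mapping a ratings matrix to an item-item similarity matrix is assumed supplied):

```python
import numpy as np

def simulate_feedback(R_true, similarity, n_iters, accept_prob, rng):
    """Simulate the model RS of Eqns. (2)-(3): starting from R_true,
    repeatedly add recommender-induced ratings H^(k) o (R_obs S_k)."""
    R_obs = R_true.copy()
    for _ in range(n_iters):
        S_k = similarity(R_obs)                      # S_k induced by current R_obs
        H_k = rng.random(R_obs.shape) < accept_prob  # which recommendations stick
        R_obs = R_true + H_k * (R_obs @ S_k)
    return R_obs
```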
Assumption 2  The Hadamard product with $H^{(k)}$ is approximated by a probability parameter $\alpha_k \in (0, 1]$.

We model the selection matrix $H^{(k)}$ and its Hadamard product in expectation and replace the successive matrices $H^{(k)}$ with independent Bernoulli random matrices with probability $\alpha_k$. Taking the expectation allows us to replace the matrix $H^{(k)}$ with the probability parameter $\alpha_k$ itself:
$$R^{k+1}_{obs} = R_{true} + \alpha_k \big(R^{k}_{obs} S_k\big) = R_{true} + \alpha_k \big(R_{true} + \alpha_{k-1}(R^{k-1}_{obs} S_{k-1})\big) S_k = \dots \qquad (4)$$
The set of $S_k, S_{k-1}, \dots$ are a priori unknown. We are now faced with the task of constructing a valid similarity metric. Towards this end, we make our next assumption.
Assumption 3  The user means $\bar R_u$ in the observed and true matrices are roughly equal: $\bar R^{(obs)}_u \approx \bar R^{(true)}_u$. The Euclidean item norms $\|R_i\|$ are also roughly equal: $\|R^{(obs)}_i\| \approx \|R^{(true)}_i\|$.

These assumptions are justified because ultimately we are interested in relative preferences of items for a user and unbiased relative ratings of items by users. These can be achieved by centering users and normalizing item ratings, respectively, in the true and observed ratings matrices. We quantitatively investigate this assumption in the supplementary material. Using this assumption, the similarity metric then becomes:
$$S(i,j) = \frac{\sum_{u \in U} (R_{u,i} - \bar R_u)(R_{u,j} - \bar R_u)}{\sqrt{\sum_{u \in U} (R_{u,i} - \bar R_u)^2}\; \sqrt{\sum_{u \in U} (R_{u,j} - \bar R_u)^2}}. \qquad (5)$$
This metric is known as the adjusted cosine similarity, and is preferred over cosine similarity because it mitigates the effect of rating schemes over users [25]. Using the relations $\hat R_{u,i} = R_{u,i} - \bar R_u$ and $\tilde R_{u,i} = \hat R_{u,i}/\|\hat R_i\|$, where $\|\hat R_i\| = \sqrt{\sum_{u \in U}(R_{u,i} - \bar R_u)^2}$, the expression of our recommender (4) becomes:
$$\tilde R_{obs} = \tilde R_{true}\Big(I + f_1(a_1)\, \tilde R_{true}^\top \tilde R_{true} + f_2(a_2)\, (\tilde R_{true}^\top \tilde R_{true})^2 + f_3(a_3)\, (\tilde R_{true}^\top \tilde R_{true})^3 + \dots\Big) \qquad (6)$$
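A minimal numpy sketch of the user-centering and item-normalization that define $\tilde R$, so that $S = \tilde R^\top \tilde R$ is exactly the adjusted cosine similarity (5) (a dense-matrix illustration, not the authors' code; with missing entries, the means and norms would run over observed ratings only, and the zero-norm guard is an implementation detail):

```python
import numpy as np

def center_and_normalize(R):
    """User-center and item-normalize a ratings matrix (rows: users)."""
    R_hat = R - R.mean(axis=1, keepdims=True)   # subtract each user's mean
    norms = np.linalg.norm(R_hat, axis=0)       # Euclidean norm per item
    norms[norms == 0.0] = 1.0                   # guard all-zero items
    return R_hat / norms

# Adjusted cosine similarity of items i and j: S = R_tilde.T @ R_tilde.
```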
In (6), $f_1, f_2, f_3, \dots$ are functions of the probability parameters $a_k = [\alpha_1, \alpha_2, \dots, \alpha_k, \dots]$ of the form $f_z(a_z) = \sum c\, \alpha_1^{c_1}\alpha_2^{c_2}\cdots\alpha_k^{c_k}\cdots$ such that $\sum_k c_k = z$, where $c$ is a constant. The proof of equation (6) is in the supplementary material. We see that the centering and normalization result in $\tilde R_{obs}$ being explicitly represented in terms of $\tilde R_{true}$ and the coefficients $f(a)$. It is now possible to recover $\tilde R_{true}$, but the coefficients $f(a)$ are a priori unknown. Thus, our next assumption.

Assumption 4  $f_z(a_z) = \alpha^z$, i.e., the coefficients of the series (6) are induced by powers of a constant probability parameter $\alpha \in (0, 1]$.

Note that in recommender (3), $R_{obs}$ becomes denser with every iteration, and hence the higher order Hadamard products in the series fill fewer missing terms. The effect of absorbing the unknowable probability parameters $\alpha_k$ into the single probability parameter $\alpha$ is similar: powers of $\alpha$ produce successively less of an impact, just as in the true model. The governing expression now becomes:
$$\tilde R_{obs} = \tilde R_{true}\Big(I + \alpha\, \tilde R_{true}^\top \tilde R_{true} + \alpha^2 (\tilde R_{true}^\top \tilde R_{true})^2 + \alpha^3 (\tilde R_{true}^\top \tilde R_{true})^3 + \dots\Big) \qquad (7)$$
In order to ensure convergence of this equation, we make our final assumption.

Assumption 5  The spectral radius of the similarity matrix $\alpha \tilde R_{true}^\top \tilde R_{true}$ is less than 1.

This assumption enables us to write the infinite series representing $\tilde R_{obs}$, i.e. $\tilde R_{true}(I + \alpha \tilde R_{true}^\top \tilde R_{true} + \alpha^2 (\tilde R_{true}^\top \tilde R_{true})^2 + \alpha^3 (\tilde R_{true}^\top \tilde R_{true})^3 + \dots)$, as $\tilde R_{true}(I - \alpha \tilde R_{true}^\top \tilde R_{true})^{-1}$. It states that given $\alpha$, we scale the matrix $\tilde R_{true}^\top \tilde R_{true}$ such that the spectral radius of $\alpha \tilde R_{true}^\top \tilde R_{true}$ is less than 1.² Then we are able to recover $\tilde R_{true}$ up to a scaling constant.

² See [10] for details on scaling similarity matrices to ensure convergence.

[Figure 2: (a) to (f): Our procedure for scoring ratings based on the deconvolved scores, with true initial ratings in cyan and ratings due to the recommender in red. (a) The observed and deconvolved ratings. (b) The RANSAC fit to extract a straight line passing through the data points for each item. (c) Rotation and translation of the data points using the fitted line so that the scatter plot is approximately parallel to the y-axis and recommender effects are distinguishable along the x-axis. (d) Scaling of the data points used for subsequent score assignment. (e) Score assignment using the vertex of the hyperbola (with asymptote slope 1) that passes through the data point. (f) Increasing α deconvolves implicit feedback loops to a greater extent and better discriminates recommender effects, as illustrated by the red points, which show more pronounced deviation when α = 1.]

Discussion of assumptions.  We now briefly discuss the implications of our assumptions. First, assumption 1 states the recommender model. Assumption 2 states that we are modeling expected
behavior rather than actual behavior. Assumptions 3-5 are key to our method working. They
essentially state that the RS?s effects are limited in scope so that they cannot dominate the world.
This has a few interpretations on real-world data. The first would be that we are considering the
impact of the RS over a short time span. The second would be that the recommender effects are
essentially second-order and that there is some other true effect which dominates them. We discuss
the mechanism of solving equation 7 using the above set of five assumptions next.
2.2 The algorithm for deconvolving feedback loops
Theorem 1  Assuming the RS follows (7), $\alpha$ is between 0 and 1, and the singular value decomposition of the observed rating matrix is $\tilde R_{obs} = U \Sigma_{obs} V^\top$, the deconvolved matrix $\tilde R_{true}$ of true ratings is given as $U \Sigma_{true} V^\top$, where $\Sigma_{true}$ is a diagonal matrix with elements:
$$\sigma_i^{true} = -\frac{1}{2\alpha\, \sigma_i^{obs}} + \sqrt{\frac{1}{4\alpha^2 (\sigma_i^{obs})^2} + \frac{1}{\alpha}}. \qquad (8)$$
The proof of the theorem is in the supplementary material. In practical applications, the feedback loops are deconvolved by taking a truncated SVD (low-rank approximation) instead of the complete decomposition. In this process, we naturally concede accuracy for performance. We consider the matrix of singular values $\hat\Sigma_{obs}$ to only contain the $k$ largest singular values (the other singular values are replaced by zero). We now state Algorithm 1 for deconvolving feedback loops. The algorithm is simple to compute as it just involves a singular value decomposition of the observed ratings matrix.
3 Results and recommender system scoring
We tested our approach for deconvolving feedback loops on synthetic RS, and designed a metric to
identify the ratings most affected by the RS. We then use the same automated technique to study
real-world ratings data, and find that the metric is able to identify items influenced by a RS.
Algorithm 1 Deconvolving Feedback Loops
Input: Robs, α, k, where Robs is the observed ratings matrix, α is the parameter governing feedback loops, and k is the number of singular values
Output: R̄_true, the true rating matrix
1: Compute R̄_obs given Robs, where R̄_obs is the user-centered observed matrix
2: Compute R̄_obs ← R̄_obs D_N^(-1), where R̄_obs is the item-normalized rating matrix and D_N is the diagonal matrix of item norms D_N(i,i) = sqrt( Σ_{u∈U} (R_{u,i} - R̄_u)^2 )
3: Solve U Σ_obs V^T ← SVD(R̄_obs, k), the truncated SVD corresponding to the k largest singular values
4: Perform σ_i^true ← -1/(2α σ_i^obs) + sqrt( 1/(4α^2 (σ_i^obs)^2) + 1/α ) for all i
5: return U, Σ_true, V^T
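Since the whole procedure reduces to one truncated SVD, it fits in a few lines. A minimal numpy sketch (ours; the function name is ours, zeros are assumed to mark unrated entries, and the kept singular values are assumed nonzero):

import numpy as np

def deconvolve_feedback_loops(R_obs, alpha=1.0, k=100):
    # Step 1: user-center the observed ratings.
    rated = R_obs != 0
    user_mean = R_obs.sum(axis=1) / np.maximum(rated.sum(axis=1), 1)
    R_bar = np.where(rated, R_obs - user_mean[:, None], 0.0)
    # Step 2: item-normalize with the diagonal matrix of item norms D_N.
    item_norm = np.maximum(np.sqrt((R_bar ** 2).sum(axis=0)), 1e-12)
    R_bar = R_bar / item_norm
    # Step 3: truncated SVD keeping the k largest singular values.
    U, s_obs, Vt = np.linalg.svd(R_bar, full_matrices=False)
    U, s_obs, Vt = U[:, :k], s_obs[:k], Vt[:k, :]
    # Step 4: Eq. (8) maps observed to deconvolved singular values.
    s_true = -1 / (2 * alpha * s_obs) + np.sqrt(
        1 / (4 * alpha ** 2 * s_obs ** 2) + 1 / alpha)
    # Step 5: the deconvolved (centered, normalized) rating matrix.
    return U @ np.diag(s_true) @ Vt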
Figure 3: Results for a synthetic RS with controllable effects. (Left to right): (a) ROC curves by
varying data sparsity, (b) ROC curves by varying the parameter α, (c) ROC curves by varying the feedback exponent, (d) score assessing the overall recommendation effects as we vary the true effect.
3.1 Synthetic data simulating a real-world recommender system
We use item response theory to generate a sparse true rating matrix Rtrue using a model related to that in [12]. Let a_u be the center of user u's rating scale, and b_u be the rating sensitivity of user u. Let t_i be the intrinsic score of item i. We generate a user-item rating matrix as:

R_{u,i} = L[a_u + b_u t_i + ε_{u,i}]    (9)

where L[·] is the discrete levels function assigning a score in the range 1 to 5, L[x] = max(min(round(x), 5), 1), and ε_{u,i} is a noise term. In our experiment, we draw a_u ∼ N(3, 1), b_u ∼ N(0.5, 0.5), t_i ∼ N(0.1, 1), and ε_{u,i} ∼ ε N(0, 1), where N is a standard normal and ε is a noise
parameter. We sample these ratings uniformly at random by specifying a desired level of rating sparsity, which serves as the input, Rtrue, to our RS. We then run a cosine-similarity-based RS, progressively increasing the density of the rating matrix. The unknown ratings are iteratively updated using the standard item-item collaborative filtering technique [8] as

R_{u,i}^{k+1} = Σ_{j∈i} ( s_{i,j}^k R_{u,j}^k ) / Σ_{j∈i} | s_{i,j}^k |,

where k is the iteration number and R^0 = Rtrue, and the similarity measure at the k-th iteration is given as

s_{i,j}^k = Σ_{u∈U} R_{u,i}^k R_{u,j}^k / ( sqrt( Σ_{u∈U} (R_{u,i}^k)^2 ) sqrt( Σ_{u∈U} (R_{u,j}^k)^2 ) ).

After the k-th iteration, each synthetic user accepts the top r recommendations with probability proportional to (R_{u,i}^{k+1})^e, where e is an exponent controlling the frequency of acceptance. We fix the number of iterative updates to be 10, r to be 10, and the resulting rating matrix is Robs. We deconvolve Robs as per Algorithm 1 to output R̄_true. Recall, R̄_true is user-centered and item-normalized. In the absence of any recommender effects Rrecom, the expectation is that R̄_true is perfectly correlated with R̄_obs. The absence of a linear correlation hints at factors extraneous to the user, i.e., the recommender. Thus, we plot R̄_true (the deconvolved ratings) against R̄_obs, and search for characteristic signals that exemplify recommender effects (see Figure 2a and inset).
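For concreteness, the generator and one fill-in iteration can be sketched in numpy as follows (our simplified code; the acceptance step, where each user keeps the top r recommendations with probability proportional to (R_{u,i}^{k+1})^e, is omitted for brevity):

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, sparsity, noise = 1000, 100, 0.1, 1.0

# Item-response-theory ratings, Eq. (9): R = L[a_u + b_u t_i + eps].
a = rng.normal(3, 1, n_users)            # user rating-scale centers
b = rng.normal(0.5, 0.5, n_users)        # user rating sensitivities
t = rng.normal(0.1, 1, n_items)          # intrinsic item scores
eps = noise * rng.standard_normal((n_users, n_items))
R = np.clip(np.round(a[:, None] + b[:, None] * t[None, :] + eps), 1, 5)
R_true = R * (rng.random((n_users, n_items)) < sparsity)  # sparsify

def cf_step(R):
    # One item-item collaborative-filtering fill-in step (cosine sim).
    norms = np.maximum(np.sqrt((R ** 2).sum(axis=0)), 1e-12)
    S = (R.T @ R) / np.outer(norms, norms)
    pred = (R @ S) / np.maximum(np.abs(S).sum(axis=0), 1e-12)
    return np.where(R != 0, R, pred)      # keep the known ratings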
3.2 A metric to assess a recommender system
We develop an algorithm guided by the intuition that deviation of ratings from a straight line suggests recommender effects (Algorithm 2). The procedure is visually elucidated in Figure 2. We consider fitting a line to the observed and deconvolved (equivalently estimated true) ratings; however, our experiments indicate that a least-squares fit of a straight line in the presence of severe recommender effects is not robust. The outliers in our formulation correspond to recommended items. Hence, we use random sample consensus, or the RANSAC method [11], to fit a straight line on a per-item basis
Table 1: Datasets and parameters

Dataset           Users   Items   Min RPI   Ratings   k in SVD   Score
Jester-1          24.9K   100     1         615K      100        0.0487
Jester-2          50.6K   140     1         1.72M     140        0.0389
MusicLab-Weak     7149    48      1         25064     48         0.1073
MusicLab-Strong   7192    48      1         23386     48         0.1509
MovieLens-100K    943     603     50        83.2K     603        0.2834
MovieLens-1M      6.04K   2514    50        975K      2514       0.3033
MovieLens-10M     69.8K   7259    50        9.90M     1500       0.3821
BeerAdvocate      31.8K   9146    20        1.35M     1500       0.2223
RateBeer          28.0K   20129   20        2.40M     1500       0.1526
Fine Foods        130K    5015    20        329K      1500       0.1209
Wine Ratings      21.0K   8772    20        320K      1500       0.1601
Netflix           480K    16795   100       100M      1500       0.2661
(Figure 2b). All these straight lines are translated and rotated so as to coincide with the y-axis as displayed in Figure 2c. Observe that the data points corresponding to recommended ratings pop out as a bump along the x-axis. Thus, the effect of the RANSAC and rotation is to place the ratings into a precise location. Next, the ratings are scaled so as to make the maximum absolute values of the rotated and translated R̄_true, R̄_obs equal (Figure 2d).

The scores we design are to measure "extent" into the x-axis, but we want to allow some vertical displacement. The final score we assign is given by fitting a hyperbola through each rating viewed as a point (R̄_true, R̄_obs). A straight line of unit slope passing through the origin is fixed as an asymptote to all hyperbolas. The vertex of this hyperbola serves as the score of the corresponding data point: the higher the value of the vertex of the hyperbola associated with a data point, the more likely the data point is a recommended item. Using the relationship between the slope of the asymptote and the vertex of the hyperbola, the score s(R̄_true, R̄_obs) is given by:

s(R̄_true, R̄_obs) = real( sqrt( R̄_true^2 - R̄_obs^2 ) )    (10)

We set the slope of the asymptote to 1 because the maximum magnitudes of R̄_true, R̄_obs are equal (see Figure 2d,e). The overall algorithm is stated in the supplementary material. Scores are zero if the point is inside the hyperbola with vertex 0.
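Equation (10) is one line of code. A direct transcription (our sketch; the complex cast is just a convenient way to zero out points lying inside the vertex-0 hyperbola, and the dataset-level RS score used later in Section 3.4 is the fraction of nonzero item scores):

import numpy as np

def recommender_score(r_true, r_obs):
    # Eq. (10): vertex of the unit-slope hyperbola through the point
    # (r_true, r_obs); the real part is 0 when |r_obs| >= |r_true|.
    r_true = np.asarray(r_true, dtype=complex)
    return np.real(np.sqrt(r_true ** 2 - np.asarray(r_obs) ** 2))

# Dataset-level RS score: fraction of ratings with a nonzero score.
# rs_score = (recommender_score(rt, ro) > 0).mean()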
3.3 Identifying high recommender effects in the synthetic system
We display the ROC curve of our algorithm to identify recommended products in our synthetic simulation by varying the sparsity of Rtrue (Figure 3a), varying α (Figure 3b), and varying the exponent e (Figure 3c) for the acceptance probability. The dimensions of the rating matrix are fixed at [1000, 100], with 1000 users and 100 items. Decreasing the sparsity as well as α has adverse effects on the ROC curve, and hence the AUC values, as is natural. The fact that high values of α produce more discriminative deconvolved ratings is clearly illustrated in Figure 2f. Additionally, Figure 3d shows that the calculated score varies linearly with the true score as we change the recommender exponent e, color coded in the legend. Overall, our algorithm is remarkably successful in extracting recommended items from Robs without any additional information. Also, we can score the overall impact of the RS (see the upcoming section on RS scores), and it accurately tracks the true effect of the RS.
3.4 Real data
In this subsection we validate our approach for deconvolving feedback loops on a real-world RS.
First, we demonstrate that the deconvolved ratings are able to distinguish datasets that use a RS
against those that do not. Second, we specify a metric that reflects the extent of RS effects on the
final ratings matrix. Finally, we validate that the score returned by our algorithm is indicative of the
recommender effects on a per-item basis. We use α = 1 in all experiments because it models the case when the recommender effects are strong and thus produces the highest discriminative effect between the observed and true ratings (see Figure 2f). This is likely to be the most useful as our model is only an approximation.
Figure 4: (Left to Right) A density plot of deconvolved and observed ratings on the Jester joke dataset (Left), which had no feedback loops, and on the Netflix dataset (Left Center), where their Cinematch algorithm was running. The Netflix data shows dispersive effects indicative of a RS, whereas the Jester data is highly correlated, indicating no feedback system. A scatter plot of deconvolved and observed ratings on the MusicLab-Weak dataset (Right Center), which had no download counts, and on the MusicLab-Strong dataset (Right), which displayed the download counts. The MusicLab-Strong scatter plot shows higher dispersive effects indicative of feedback effects.
Datasets. Table 1 lists all the datasets we use to validate our approach for deconvolving a RS
(from [21, 4, 13]). The columns detail the name of the dataset, the number of users, the number of items,
the lower threshold for number of ratings per item (RPI) considered in the input ratings matrix and
the number of singular vectors k (as many as possible based on the limits of computer memory),
respectively. The datasets are briefly discussed in the supplementary material.
Classification of ratings matrix.
An example of the types of insights our method enables is shown in Figure 4. This figure shows four
density plots of the estimated true ratings (y-axis) compared with the observed ratings (x-axis) for
two datasets, Jester and Netflix. Higher density is indicated by darker shades in the scatter plot of
observed and deconvolved ratings. If there is no RS, then these should be highly correlated. If there
is a system with feedback loops, we should see a dispersive plot. In the first plot (Jester) we see the
results for a real-world system without any RS or feedback loops; the second plot (Netflix) shows the
results on the Netflix ratings matrix, which did have a RS impacting the data. A similar phenomenon
is observed in the third and fourth plots corresponding to the MusicLab dataset in Figure 4. We
display the density plot of observed (y-axis) vs. deconvolved or expected true (x-axis) ratings for all
datasets considered in our evaluation in the supplementary material.
Recommender system scores. The RS scores we displayed
in Table 1 are based on the fraction of ratings with non-zero
score (using the score metric (10)). Recall that a zero score
indicates that the data point lies outside the associated hyperbola and does not suffer from recommender effect. Hence,
the RS score is indicative of the fraction of ratings affected
by the recommender. Looking at Table 1, we see that the two
Jester datasets have low RS scores validating that the Jester
dataset did not run a RS. The MusicLab datasets show a weak
effect because they do not include any type of item-item recommender. Nevertheless, the strong social influence condition
scored higher for a RS because the simple download count
feedback will elicit comparable effects. These cases give us
confidence in our scores because we have a clear understanding of feedback processes in the true data. Interestingly, the
RS score progressively increases for the three versions of the MovieLens datasets: MovieLens-100K, MovieLens-1M, and MovieLens-10M. This is expected as the RS effects would have progressively accrued over time in these datasets. Note that Netflix is also lower than MovieLens, indicating that Netflix's recommender likely correlated better with users' true tastes. The RS scores associated with the alcohol datasets (RateBeer, BeerAdvocate and Wine Ratings) are higher compared to the Fine Foods dataset. This is surprising. We conjecture that this effect is due to common features that correlate with evaluations of alcohol, such as the age of wine or the percentage of alcohol in beer.

Figure 5: (Top to bottom) (a) Deconvolved ranking as a bar chart for T.V. shows. (b) Deconvolved ranking as a bar chart for Indian movies.
Ranking of items based on recommendation score. We associate an RS score to each item as the mean score of the item over all users. All items are ranked in ascending order of RS score and we
first look at items with low RS scores. The Netflix dataset comprises movies as well as television
shows. We expect that television shows are less likely to be affected by a RS because each season of
a T.V. show requires longer time commitment, and they have their own following. To validate this
expectation, we first identify all T.V. shows in the ranked list and compute the number of occurrences
of a T.V. show in equally spaced bins of size 840. Figure 5 shows a bar chart for the number of occurrences, and we see that there are around 90 T.V. shows in the first bin (or top 840 items as per the score). This is the highest compared to all bins, and the number of occurrences progressively decreases as we move further down the list, validating our expectation. Also unsurprisingly, the seasons of the popular sitcom Friends made up 10 of the top 20 T.V. seasons with the lowest RS scores.
It is also expected that the Season 1 of a T.V. show is more likely to be recommended relative to
subsequent seasons. We identified the top 40 T.V. shows with multiple (at least 2) seasons, and
observed that 31 of these have a higher RS score for Season 1 relative to Season 2. The 9 T.V. shows
where the converse is true are mostly comedies like Coupling, That 70's Show, etc., for which the
seasons can be viewed independently of each other. Next, we looked at items with high RS score. At
the time the dataset was released, Netflix operated exclusively in the U.S., and one plausible use is
that immigrants might use Netflix's RS to watch movies from their native country. We specifically
looked at Indian films in the ranked list to validate this expectation. Figure 5b shows a bar chart
similar to the one plotted for T.V. shows and we observe an increasing trend along the ranked list for
the number of occurrences of Indian films. The movie with the lowest recommendation score is Lagaan, the only Indian movie to be nominated for the Oscars in the last 25 years.
4 Discussion, related work and future work
Discussion: In this paper we propose a mechanism to deconvolve feedback effects on RS, similar in
spirit to the network deconvolution method to distinguish direct dependencies in biological networks
[10, 3]. Indeed, our approach can be viewed as a generalization of their methods for general
rectangular matrices. We do so by only considering a ratings matrix at a given instant of time. Our
approach depends on a few reasonable assumptions that enable us to create a tractable model of a RS.
When we evaluate the resulting methods on synthetic and real-world datasets, we find that we are
able to assess the degree of influence that a RS has had on those ratings. This analysis is also easy to
compute and just involves a singular value decomposition of the ratings matrix.
Related Work: User feedback in collaborative filtering systems is categorized as either explicit
feedback which includes input by users regarding their interest in products [1], or implicit feedback
such as purchase and browsing history, search patterns, etc. [14]. Both types of feedback affect
the item-item or user-user similarities used in the collaborative filtering algorithm for predicting
future recommendations [16]. There has been a considerable amount of work on incorporating the
information from these types of user feedback mechanisms in collaborative filtering algorithms in
order to improve and personalize recommendations [15, 6]. Here, we do not focus on improving
collaborative filtering algorithms for recommender systems by studying user feedback, but instead,
our thrust is to recover each user's true preference of an item devoid of any rating bias introduced
by the recommender system due to feedback. Another line of work based on user feedback in
recommender systems is related to understanding the exploration and exploitation tradeoff [20]
associated with the training feedback loop in collaborative filtering algorithms [9]. This line of
research evaluates "what-if" scenarios, such as evaluating the performance of alternative collaborative filtering models or adapting the algorithm based on user-click feedback to maximize reward, using
approaches like the multi-armed bandit setting [17, 18] or counterfactual learning systems [5]. In
contrast, we tackle the problem of recovering the true ratings matrix if feedback loops were absent.
Future Work: In the future we wish to analyze the effect of feeding the derived deconvolved ratings
without putative feedback effects back into the RS. Some derivatives of our method include setting
the parameters considered unknown in our current approach with known values (such as S) if known
a priori. Incorporating temporal information at different snapshots of time while deconvolving the
feedback loops is also an interesting line of future work. From another viewpoint, our approach
can serve as a supplement to the active learning community to unbias the data and reveal additional
insights regarding feedback loops considered in this paper. Overall, we believe that deconvolving
feedback loops opens new gateways for understanding ratings and recommendations.
Acknowledgements: David Gleich would like to acknowledge the support of the NSF via awards CAREER
CCF-1149756, IIS-1422918, IIS-1546488, and the Center for Science of Information STC, CCF-093937, as
well as the support of DARPA SIMPLEX.
References
[1] G. Adomavicius and A. Tuzhilin. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. on Knowl. and Data Eng., 17(6):734-749, June 2005.
[2] X. Amatriain, J. M. Pujol, N. Tintarev, and N. Oliver. Rate it again: Increasing recommendation accuracy by user re-rating. In RecSys, pp. 173-180, 2009.
[3] B. Barzel and A.-L. Barabási. Network link prediction by global silencing of indirect correlations. Nature Biotechnology, 31(8):720-725, 2013.
[4] J. Bennett and S. Lanning. The Netflix prize. In Proceedings of the KDD Cup Workshop, pp. 3-6, 2007.
[5] L. Bottou, J. Peters, J. Quiñonero-Candela, D. X. Charles, D. M. Chickering, E. Portugaly, D. Ray, P. Simard, and E. Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207-3260, 2013.
[6] L. Chen, G. Chen, and F. Wang. Recommender systems based on user reviews: the state of the art. User Modeling and User-Adapted Interaction, 25(2):99-154, 2015.
[7] D. Cosley, S. K. Lam, I. Albert, J. A. Konstan, and J. Riedl. Is seeing believing?: How recommender system interfaces affect users' opinions. In CHI, pp. 585-592, 2003.
[8] M. Deshpande and G. Karypis. Item-based top-n recommendation algorithms. ACM Trans. Inf. Syst., 22(1):143-177, Jan. 2004.
[9] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. American Economic Review, 97(1):242-259, 2007.
[10] S. Feizi, D. Marbach, M. Medard, and M. Kellis. Network deconvolution as a general method to distinguish direct dependencies in networks. Nature Biotechnology, 31(8):726-733, July 2013.
[11] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381-395, June 1981.
[12] D. F. Gleich and L.-H. Lim. Rank aggregation via nuclear norm minimization. In KDD, pp. 60-68, 2011.
[13] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Inf. Retr., 4(2):133-151, July 2001.
[14] Y. Hu, Y. Koren, and C. Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, pp. 263-272, 2008.
[15] G. Jawaheer, M. Szomszor, and P. Kostkova. Comparison of implicit and explicit feedback from an online music recommendation service. In Proceedings of the Workshop on Information Heterogeneity and Fusion in Recommender Systems, pp. 47-51, 2010.
[16] N. Lathia, S. Hailes, L. Capra, and X. Amatriain. Temporal diversity in recommender systems. In SIGIR, pp. 210-217, 2010.
[17] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In WWW, pp. 661-670, 2010.
[18] W. Li, X. Wang, R. Zhang, Y. Cui, J. Mao, and R. Jin. Exploitation and exploration in a performance based contextual advertising system. In KDD, pp. 27-36, 2010.
[19] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76-80, Jan. 2003.
[20] J. G. March. Exploration and exploitation in organizational learning. Organization Science, 2(1):71-87, 1991.
[21] J. J. McAuley and J. Leskovec. From amateurs to connoisseurs: Modeling the evolution of user expertise through online reviews. In WWW, pp. 897-908, 2013.
[22] R. S. Poston and C. Speier. Effective use of knowledge management systems: A process model of content ratings and credibility indicators. MIS Quarterly, 29(2):221-244, 2005.
[23] F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor. Recommender Systems Handbook. Springer-Verlag, New York, 2010.
[24] M. J. Salganik, P. S. Dodds, and D. J. Watts. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762):854-856, 2006.
[25] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In WWW, pp. 285-295, 2001.
5,840 | 6,284 | Latent Attention For If-Then Program Synthesis
Xinyun Chen∗
Shanghai Jiao Tong University
Chang Liu
Richard Shin
UC Berkeley
Dawn Song
Mingcheng Chen†
UIUC
Abstract
Automatic translation from natural language descriptions into programs is a longstanding challenging problem. In this work, we consider a simple yet important sub-problem: translation from textual descriptions to If-Then programs. We
devise a novel neural network architecture for this task which we train end-to-end. Specifically, we introduce Latent Attention, which computes multiplicative
weights for the words in the description in a two-stage process with the goal of
better leveraging the natural language structures that indicate the relevant parts for
predicting program elements. Our architecture reduces the error rate by 28.57%
compared to prior art [3]. We also propose a one-shot learning scenario of If-Then
program synthesis and simulate it with our existing dataset. We demonstrate a
variation on the training procedure for this scenario that outperforms the original
procedure, significantly closing the gap to the model trained with all data.
1 Introduction
A touchstone problem for computational linguistics is to translate natural language descriptions into
executable programs. Over the past decade, there has been an increasing number of attempts to
address this problem from both the natural language processing community and the programming
language community. In this paper, we focus on a simple but important subset of programs containing only one If-Then statement.
An If-Then program, which is also called a recipe, specifies a trigger and an action function, representing a program which will take the action when the trigger condition is met. On websites, such
as IFTTT.com, a user often provides a natural language description of the recipe's functionality as
well. Recent work [16, 3, 7] studied the problem of automatically synthesizing If-Then programs
from their descriptions. In particular, LSTM-based sequence-to-sequence approaches [7] and an
approach of ensembling a neural network and logistic regression [3] were proposed to deal with this
problem. In [3], however, the authors claim that the diversity of vocabulary and sentence structures
makes it difficult for an RNN to learn useful representations, and their ensemble approach indeed
shows better performance than the LSTM-based approach [7] on the function prediction task (see
Section 2).
In this paper, we introduce a new attention architecture, called Latent Attention, to overcome this
difficulty. With Latent Attention, a weight is learned on each token to determine its importance for
prediction of the trigger or the action. Unlike standard attention methods, Latent Attention computes
the token weights in a two-step process, which aims to better capture the sentence structure. We show
that by employing Latent Attention over outputs of a bi-directional LSTM, our new Latent Attention
model can improve over the best prior result [3] by 5 percentage points from 82.5% to 87.5% when
predicting the trigger and action functions together, reducing the error rate of [3] by 28.57%.
Besides the If-Then program synthesis task proposed by [16], we are also interested in a new scenario. When a new trigger or action is released, the training data will contain few corresponding
∗ Part of the work was done while visiting UC Berkeley.
† Work was done while visiting UC Berkeley. Mingcheng Chen is currently working at Google [X].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
examples. We refer to this case as a one-shot learning problem. We show that our Latent Attention model on top of dictionary embedding, combined with a new training algorithm, can achieve a reasonably good performance for the one-shot learning task.
2 If-Then Program Synthesis
If-Then Recipes. In this work, we consider an important class of simple programs called If-Then "recipes" (or recipes for short), which are very small programs for event-driven automation
of tasks. Specifically, a recipe consists of a trigger and an action, indicating that the action will be
executed when the trigger is fulfilled.
The simplicity of If-Then recipes makes it a great tool for users who may not know how to code.
Even non-technical users can specify their goals using recipes, instead of writing code in a more
full-fledged programming language. A number of websites have embraced the If-Then programming paradigm and have been hugely successful with tens of thousands of personal recipes created,
including IFTTT.com and Zapier.com. In this paper, we focus on data crawled from IFTTT.com.
IFTTT.com allows users to share their recipes publicly, along with short natural language descriptions to explain the recipes' functionality. A recipe on IFTTT.com consists of a trigger channel, a
trigger function, an action channel, an action function, and arguments for the functions. There are a
wide range of channels, which can represent entities such as devices, web applications, and IFTTTprovided services. Each channel has a set of functions representing events (i.e., trigger functions) or
action executions (i.e., action functions).
For example, an IFTTT recipe with the following description
Autosave your Instagram photos to Dropbox
has the trigger channel Instagram, trigger function Any new photo by you, action channel
Dropbox, and action function Add file from URL. Some functions may take arguments. For example, the Add file from URL function takes three arguments: the source URL, the name for the
saved file, and the path to the destination folder.
Problem Setup. Our task is similar to that in [16]. In particular, for each description, we focus on
predicting the channel and function for trigger and action respectively. Synthesizing a valid recipe
also requires generating the arguments. As argued by [3], however, the arguments are not crucial for
representing an If-Then program. Therefore, we defer our treatment for arguments generation to the
supplementary material, where we show that a simple frequency-based method can outperform all
existing approaches. In this way, our task turns into two classification problems for predicting the
trigger and action functions (or channels).
Besides the problem setup in [16], we also introduce a new variation of the problem, a one-shot
learning scenario: when some new channels or functions are initially available, there are very few
recipes using these channels and functions in the training set. We explore techniques to still achieve
a reasonable prediction accuracy on labels with very few training examples.
3 Related Work
Recently there has been increasing interests in executable code generation. Existing works have
studied generating domain-specific code, such as regular expressions [12], code for parsing input
documents [14], database queries [22, 4], commands to robots [10], operating systems [5], smartphone automation [13], and spreadsheets [8]. A recent effort considers translating a mixed natural
language and structured specification into programming code [15]. Most of these approaches rely on
semantic parsing [19, 9, 1, 16]. In particular, [16] introduces the problem of translating IFTTT descriptions into executable code, and provides a semantic parsing-based approach. Two recent work
studied approaches using sequence-to-sequence model [7] and an ensemble of a neural network and
a logistic regression model [3] to deal with this problem, and showed better performance than [16].
We show that our Latent Attention method outperforms all prior approaches. Recurrent neural networks [21, 6] along with attention [2] have demonstrated impressive results on tasks such as machine
translation [2], generating image captions [20], syntactic parsing [18] and question answering [17].
[Figure 1 is a diagram: the description {x_i} is embedded three times (Embed θ1 as the latent input, Embed θ2 as the active input, Embed θ3 for the output); a softmax over the θ1 embedding gives the latent attention, a column-wise softmax over the θ2 embedding gives the active attention, their weighted sum gives the active weights, and a final weighted sum followed by a softmax gives the prediction.]

Figure 1: Network Architecture
4 Latent Attention Model

4.1 Motivation
To translate a natural language description into a program, we would like to locate the words in
the description that are the most relevant for predicting desired labels (trigger/action channels/functions). For example, in the following description
Autosave Instagram photos to your Dropbox folder
the blue text ?Instagram photos? is the most relevent for predicting the trigger. To capture this information, we can adapt the attention mechanism [2, 17] ?first compute a weight of the importance of
each token in the sentence, and then output a weighted sum of the embeddings of these tokens.
However, our intuition suggests that the weight for each token depends not only on the token itself,
but also on the overall sentence structure. For example, in
Post photos in your Dropbox folder to Instagram
"Dropbox" determines the trigger, even though in the previous example, which contains almost the same set of tokens, "Instagram" should play this role. In this example, the prepositions such as "to" hint that the trigger channel is specified in the middle of the description rather than at the end. Taking this into account allows us to select "Dropbox" over "Instagram".
Latent Attention is designed to exploit such clues. We use the usual attention mechanism for computing a latent weight for each token to determine which tokens in the sequence are more relevant to
the trigger or the action. These latent weights determine the final attention weights, which we call
active weights. As an example, given the presence of the token "to", we might look at the tokens before "to" to determine the trigger.
4.2 The network
The Latent Attention architecture is presented in Figure 1. We follow the convention of using lowercase letters to indicate column vectors, and capital letters for matrices. Our model takes as input
a sequence of symbols x1 , ..., xJ , with each coming from a dictionary of N words. We denote
X = [x1 , ..., xJ ]. Here, J is the maximal length of a description. We illustrate each layer of the
network below.
Latent attention layer. We assume each symbol xi is encoded as a one-hot vector of N dimensions. We can embed the input sequence X into a d-dimensional embedding sequence using
E = Embed_θ1(X), where θ1 is a set of parameters. We will discuss different embedding methods in Section 4.3. Here E is of size d × J.
The latent attention layer?s output is computed as a standard softmax on top of E. Specifically,
assume that l is the J-dimensional output vector and u is a d-dimensional trainable vector; we have

l = softmax(u^T Embed_θ1(X))
Active attention layer. The active attention layer computes each token's weight based on its importance for the final prediction. We call these weights active weights. We first embed X into D using another set of parameters θ2, i.e., D = Embed_θ2(X) is of size d × J. Next, for each token D_i, we compute its active attention input A_i through a softmax:

A_i = softmax(V D_i)

Here, A_i and D_i denote the i-th column vector of A and D respectively, and V is a trainable parameter matrix of size J × d. Notice that V D_i = (V D)_i, so we can compute A by performing a column-wise softmax over V D. Here, A is of size J × J.
The active weights are computed as the sum of the A_i, weighted by the output of the latent attention:

w = Σ_{i=1}^{J} l_i A_i = A l
Output representation. We use a third set of parameters θ3 to embed X into a d × J embedding matrix, and the final output o, a d-dimensional vector, is the sum of the embedding weighted by the active weights:

o = Embed_θ3(X) w
Prediction. We use a softmax to make the final prediction: f̂ = softmax(P o), where P is a d × M parameter matrix, and M is the number of classes.
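Putting the pieces together, one forward pass of Latent Attention can be sketched in numpy as follows (our code, not the authors' released implementation; we shape P as M × d so that P @ o yields the M logits, and we include the weight normalization described in Section 4.3):

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def latent_attention_forward(X, E1, E2, E3, u, V, P):
    # X: N x J one-hot inputs; E1, E2, E3: d x N embedding matrices;
    # u: d vector; V: J x d; P: M x d.
    E = E1 @ X                      # d x J latent-input embedding
    D = E2 @ X                      # d x J active-input embedding
    l = softmax(u @ E)              # J latent weights
    A = softmax(V @ D, axis=0)      # J x J, column-wise softmax
    w = A @ l                       # J active weights
    w = w / np.linalg.norm(w)       # normalization (Section 4.3)
    o = (E3 @ X) @ w                # d-dim output representation
    return softmax(P @ o)           # distribution over the M labels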
4.3 Details
Embeddings. We consider two embedding methods for representing words in the vector space. The first is a straightforward word embedding, i.e., Embed_θ(X) = θX, where θ is a d × N matrix and the rows of X are one-hot vectors over the vocabulary of size N. We refer to this as "dictionary embedding" later in the paper. θ is not pretrained with a different dataset or objective, but initialized randomly and learned at the same time as all other parameters. We observe that when using Latent Attention, this simple method is effective enough to outperform some recent results [16, 7].
The other approach is to take the word embeddings, run them through a bi-directional LSTM (BDLSTM) [21], and then use the concatenation of two LSTMs? outputs at each time step as the embedding. This can take into account the context around a token, and thus the embeddings should
contain more information from the sequence than from a single token. We refer to such an approach
as "BDLSTM embedding". The details are deferred to the supplementary material. In our experiments, we observe that with the help of this embedding method, Latent Attention can outperform
the prior state-of-the-art.
In Latent Attention, we have three sets of embedding parameters, i.e., θ1, θ2, θ3. In practice, we find
that we can equalize the three without loss of performance. Later, we will show that keeping them
separate is helpful for our one-shot learning setting.
Normalizing active weights. We find that normalizing the active weights w before computing the output is helpful to improve the performance. Specifically, we compute the output as

o = Embed_θ(X) normalized(w) = Embed_θ(X) w / ||w||

where ||w|| is the L2-norm of w. In our experiments, we observe that this normalization can improve the performance by 1 to 2 points.
Padding and clipping. Latent Attention requires a fixed-length input sequence. To handle inputs of variable lengths, we perform padding and clipping. If an input's length is smaller than J, then we pad it with null tokens at the end of the sequence. If an input's length is greater than J (which is 25 in our experiments), we keep the first 12 and the last 13 tokens, and get rid of all the rest.
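As a small sketch (ours; the pad symbol name is illustrative):

PAD, MAX_LEN = "<null>", 25

def pad_or_clip(tokens):
    # Keep the first 12 and last 13 tokens of overlong inputs; pad short ones.
    if len(tokens) > MAX_LEN:
        return tokens[:12] + tokens[-13:]
    return tokens + [PAD] * (MAX_LEN - len(tokens))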
Vocabulary. We tokenize each sentence by splitting on whitespace and punctuation (e.g., ".", ",", "!", "?", ":", ";", "(", ")"), and convert all characters into lowercase. We keep all punctuation symbols as tokens too. We map each of the top 4,000 most frequent tokens into themselves, and all the rest into a special token ⟨UNK⟩. Therefore our vocabulary size is 4,001. Our implementation has no special handling for typos.
5 If-Then Program Synthesis Task Evaluation
In this section, we evaluate our approaches with several baselines and previous work [16, 3, 7].
We use the same crawler from Quirk et al. [16] to crawl recipes from IFTTT.com. Unfortunately,
many recipes are no longer available. We crawled all remaining recipes, ultimately obtaining 68,083
recipes for the training set. [16] also provides a list of 5,171 recipes for validation, and 4,294 recipes
for test. All test recipes come with labels from Amazon Mechanical Turk workers. We found that
only 4,220 validation recipes and 3,868 test recipes remain available. [16] defines a subset of test
recipes, where each recipe has at least 3 workers agreeing on its labels from IFTTT.com, as the gold
testset. We find that 584 out of the 758 gold test recipes used in [16] remain available. We refer to
these recipes as the gold test set. We present the data statistics in the supplementary material.
Evaluated methods. We evaluate two embedding methods as well as the effectiveness of different
attention mechanisms. In particular, we compare no attention, standard attention, and Latent Attention. Therefore, we evaluate six architectures in total. When using dictionary embedding with no
attention, for each sentence, we sum the embedding of each word, then pass it through a softmax
layer for prediction. For convenience, we refer to such a process as standard softmax. For BDLSTM with no attention, we concatenate final states of forward and backward LSTMs, then pass the
concatenation through a softmax layer for prediction. The two embedding methods with standard
attention mechanism [17] are described in the supplementary material. The Latent Attention models
have been presented in Section 4.
Training details. Architectures with no attention were trained using a learning rate of 0.01 initially, which is multiplied by 0.9 every 1,000 time steps. Gradients with L2 norm greater than 5 were scaled down to have norm 5. Architectures with either the standard attention mechanism or Latent Attention were trained using a learning rate of 0.001 without decay, and gradients with L2 norm greater than 40 were scaled down to have norm 40. All models were trained using Adam [11]. All weights were initialized uniformly randomly in [-0.1, 0.1]. Mini-batches were randomly shuffled during training. The mini-batch size is 32 and the embedding vector size d is 50.
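The clipping and decay rules above amount to two small helpers (our sketch, framework-agnostic):

import numpy as np

def clip_by_global_norm(grads, max_norm):
    # Scale all gradients so their joint L2 norm is at most max_norm.
    total = np.sqrt(sum((g ** 2).sum() for g in grads))
    scale = min(1.0, max_norm / max(total, 1e-12))
    return [g * scale for g in grads]

def lr_at_step(step, base_lr=0.01, decay=0.9, every=1000):
    # Staircase schedule used for the no-attention models.
    return base_lr * decay ** (step // every)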
Results. Figure 2 and Figure 3 present the results of prediction accuracy on channel and function
respectively. Three previous works' results are presented as well. In particular, [16] is the first work introducing the If-Then program synthesis task. [7] investigates the approaches using sequence-to-sequence models, while [3] proposes an approach to ensemble a feed-forward neural network and a logistic regression model.
material.
For our six architectures, we use 10 different random initializations to train 10 different models. To
ensemble k models, we choose the best k models on the validation set among the 10 models, and
average their softmax outputs as the ensembled output. For the three existing approaches [16, 7, 3],
we choose the best results from these papers.
We train the model to optimize for function prediction accuracy. The channel accuracy in Figure 2
is computed in the following way: to predict the channel, we first predict the function (from a list of
all functions in all channels), and the channel that the function belongs to is returned as the predicted
channel. We observe that
• Latent Attention steadily improves over standard attention architectures and no attention ones using either embedding method.
• In our six evaluated architectures, ensembling improves upon using only one model significantly.
• When ensembling more than one model, BDLSTM embeddings perform better than dictionary embeddings. We attribute this to the fact that, for each token, BDLSTM can encode the
Figure 2: Accuracy for Channel
Figure 3: Accuracy for Channel+Function
information of its surrounding tokens, e.g., phrases, into its embedding, which is thus more
effective.
• For the channel prediction task in Figure 2, all architectures except dictionary embedding with no attention (i.e., Dict) can outperform [16]. Ensembling only 2 BDLSTM models with either standard attention or Latent Attention is enough to achieve better performance than prior art [7]. By ensembling 10 BDLSTM+LA models, we can improve the latest results [7] and [3] by 1.9 points and 2.5 points respectively.
• For the function prediction task in Figure 3, all our six models (including Dict) outperform [16]. Further, ensembling 9 BDLSTM+LA models can improve the previous best results [3]
by 5 points. In other words, our approach reduces the error rate of [3] by 28.57%.
6 One-Shot Learning
We consider the scenario when websites such as IFTTT.com release new channels and functions.
In such a scenario, for a period of time, there will be very few recipes using the newly available
channels and functions; however, we would still like to enable synthesizing If-Then programs using
these new functions. The rarity of such recipes in the training set creates a challenge similar to
the one-shot learning setting. In this scenario, we want to leverage the large amount of recipes
for existing functions, and the goal is to achieve a good prediction accuracy for the new functions
without significantly compromising the overall accuracy.
6.1 Datasets to simulate one-shot learning
To simulate this scenario with our existing dataset, we build two one-shot variants of it as follows.
We first split the set of trigger functions into two sets, based on their frequency. The top100 set
contains the top 100 most frequently used trigger functions, while the non-top100 set contains the
rest.
Given a set of trigger functions S, we can build a skewed training set to include all recipes using functions in S, and 10 randomly chosen recipes for each function not in S. We denote this skewed training set created based on S as (S, S̄), and refer to functions in S as majority functions and functions in S̄ (the complement of S) as minority functions. In our experiments, we construct two new training sets by
choosing S to be the top100 set and non-top100 set respectively. We refer to these two training sets
as SkewTop100 and SkewNonTop100.
The motivation for creating these datasets is to mimic two different scenarios. On one hand, SkewTop100 simulates the case that at the startup phase of a service, popular recipes are first published,
while less frequently used recipes are introduced later. On the other hand, SkewNonTop100 captures
the opposite situation. The statistics for these two training sets are presented in the supplementary
material. While SkewTop100 is more common in real life, the SkewNonTop100 training set is only
15.73% of the entire training set, and thus is more challenging.
[Figure 4 consists of bar charts whose axis ticks were lost in extraction. Panel (a): Trigger Function Accuracy (SkewTop100), overall (All) and on NonTop100 functions. Panel (b): Trigger Function Accuracy (SkewNonTop100), overall (All) and on Top100 functions.]

Figure 4: One-shot learning experiments. For each column XY-Z, X from {B, D} represents whether the embedding is BDLSTM or Dictionary; Y is either empty, or is from {A, L}, meaning that either no attention is used, or standard attention or Latent Attention is used; and Z is from {S, 2N, 2}, denoting standard training, naïve two-step training or two-step training.
6.2 Training
We evaluate three training methods as follows, where the last one is specifically designed for attention mechanisms. In all methods, the training data is either SkewTop100 or SkewNonTop100.
Standard training. We do not modify the training process.
Naïve two-step training. We do standard training first. Since the data is heavily skewed, the model may behave poorly on the minority functions. From a training set (S, S̄), we create a rebalanced dataset by randomly choosing 10 recipes for each function in S and all recipes using functions in S̄. Therefore, the numbers of recipes using each function are similar in this rebalanced dataset. We recommence the training using this rebalanced training dataset in the second step.
Two-step training. We still do standard training first, and then create the rebalanced dataset in a similar way as in naïve two-step training. However, in the second step, instead of training the entire network, we keep the attention parameters fixed, and train only the parameters in the remaining part of the model. Take the Latent Attention model depicted in Figure 1 as an example. In the second step, we keep parameters θ1, θ2, u, and V fixed, and only update θ3 and P while training on the rebalanced dataset. We based this procedure on the intuition that since the rebalanced dataset is very small, fewer trainable parameters enable easier training.
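In an autodiff framework, the second step amounts to excluding the attention parameters from the optimizer. A PyTorch-flavored sketch (ours; the attribute names mirror our notation and are hypothetical):

def trainable_params_for_step2(model):
    # Freeze the attention parameters (theta1, theta2, u, V) and keep
    # training only the output embedding theta3 and the softmax matrix P.
    for p in (model.theta1, model.theta2, model.u, model.V):
        p.requires_grad = False
    return [model.theta3, model.P]

# e.g., optimizer = torch.optim.Adam(trainable_params_for_step2(model), 1e-3)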
6.3 Results
We compare the three training strategies using our proposed models. We omit the no attention models, which do not perform better than attention models and cannot be trained using two-step training.
We only train one model per strategy, so the results are without ensembling. The results are presented in Figure 4. The concrete values can be found in the supplementary material. For reference,
the best single BDLSTM+LA model can achieve 89.38% trigger function accuracy: 91.11% on
top100 functions, and 85.12% on non-top100 functions. We observe that
• Using two-step training, both the overall accuracy and the accuracy on the minority functions are generally better than using standard training and naïve two-step training.
• Latent Attention outperforms standard attention when using the same training method.
• The best Latent Attention model (Dict+LA) with two-step training can achieve 82.71% and 64.84% accuracy for trigger functions on the gold test set, when trained on the SkewTop100 and SkewNonTop100 datasets respectively. For comparison, when using the entire training dataset, the trigger function accuracy of Dict+LA is 89.38%. Note that the SkewNonTop100 dataset accounts for only 15.73% of the entire training dataset.
• For the SkewTop100 training set, the Dict+LA model can achieve 78.57% accuracy on minority functions in the gold test set. This number when using the full training dataset is 85.12%, although the non-top100 recipes in SkewTop100 make up only 30.54% of those in the full training set.
[Figure 5 shows example descriptions annotated with latent and active attention weights. Correctly predicted examples include "Post your Instagram photos to Tumblr" (Instagram.Any_new_photo_by_you / Tumblr.Create_a_photo_post), "Instagram > flickr" (Instagram.Any_new_photo_by_you / Flickr.Upload_public_photo_from_URL), "Spreadsheet with the daily weather, triggered at sunrise." (Weather.Sunrise / Google_Drive.Add_row_to_spreadsheet), and "If send IFTTT a text tagged #todo, from cell phone then quick add event to google calendar." (SMS.Send_IFTTT_an_SMS_tagged / Google_Calendar.Quick_add_event). Misclassified examples include "Download ... photos of me to dropbox" (truth Facebook.You_are_tagged_in_a_photo, predicted Android_Photos.Any_new_photo) and "... to Wordpress" (truth WordPress.Create_a_post, predicted WordPress.Create_a_photo_post).]
Figure 5: Examples of attention weights output by Dict+LA. latent, trigger, and action indicate the latent weights and active weights for the trigger and the action respectively. Low values less
than 0.1 are omitted.
7 Empirical Analysis of Latent Attention
We show some correctly classified and misclassified examples in Figure 5 along with their attention weights. The weights are computed from a Dict+LA model. We choose Dict+LA instead of BDLSTM+LA because the BDLSTM embedding of each token does not correspond to the token itself only; it also contains information passed from previous and subsequent tokens in the sequence. Therefore, the attention of BDLSTM+LA is not as easy to interpret as that of Dict+LA.
The latent weights are those used to predict the action functions. In correctly classified examples, we observe that the latent weights are assigned to the prepositions that determine which parts of the sentence are associated with the trigger or the action. An interesting example is (b), where a high latent weight is assigned to ",". This indicates that LA considers "," as informative as English words such as "to". We observe a similar phenomenon in Example (c), where the token ">" has the highest latent weight.
In several misclassified examples, we observe that some attention weights may not be assigned correctly. In Example (e), although nothing explicitly indicates that the trigger should use a Facebook channel, the phrase "photo of me" hints that "me" should be tagged in the photo. Therefore, a human can infer that this should use a function from the Facebook channel, called "You are tagged in a photo". The Dict+LA model does not learn this association from the training data. In this example, we expect the model to assign high weights to the phrase "of me", but this is not the case: the weights assigned to "of" and "me" are 0.01 and 0.007, respectively. This shows that the Dict+LA model does not correlate these two words with the You_are_tagged_in_a_photo function. BDLSTM+LA, on the other hand, can jointly consider the two tokens and make the correct prediction.
Example (f) is another case where outside knowledge might help: Dict+LA predicts the trigger function to be Create_a_post, since it does not learn that Instagram consists only of photos (and low weight was placed on "Instagram" when predicting the trigger anyway). Again, BDLSTM+LA predicts this case correctly.
Acknowledgements. We thank the anonymous reviewers for their valuable comments. This material is based upon work partially supported by the National Science Foundation under Grant No. TWC-1409915, and a DARPA grant FA8750-15-2-0104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or DARPA.
A Multi-step Inertial Forward-Backward Splitting
Method for Non-convex Optimization
Jingwei Liang and Jalal M. Fadili
Normandie Univ, ENSICAEN, CNRS, GREYC
{Jingwei.Liang,Jalal.Fadili}@greyc.ensicaen.fr
Gabriel Peyré
CNRS, DMA, ENS Paris
gabriel.peyre@ens.fr
Abstract
We propose a multi-step inertial Forward-Backward splitting algorithm for minimizing the sum of two non-necessarily convex functions, one of which is proper lower semi-continuous while the other is differentiable with a Lipschitz continuous gradient. We first prove global convergence of the algorithm with the help of the Kurdyka-Łojasiewicz property. Then, when the non-smooth part is also partly smooth relative to a smooth submanifold, we establish finite identification of the latter and provide sharp local linear convergence analysis. The proposed method is illustrated on several problems arising from statistics and machine learning.
1 Introduction
1.1 Non-convex non-smooth optimization
Non-smooth optimization has proved extremely useful in all quantitative disciplines of science, including statistics and machine learning. A common trend in modern science is the increase in the size of datasets, which drives the need for more efficient optimization schemes. For large-scale problems with non-smooth and possibly non-convex terms, it is possible to generalize gradient descent with the Forward-Backward (FB) splitting scheme [3] (a.k.a. proximal gradient descent), which includes projected gradient descent as a sub-case.
Formally, we equip ℝⁿ, the n-dimensional Euclidean space, with the standard inner product ⟨·,·⟩ and associated norm ‖·‖. Our goal is the generic minimization of composite objectives of the form

    min_{x∈ℝⁿ} Φ(x) := R(x) + F(x),    (P)

where we have:
(A.1) R : ℝⁿ → ℝ ∪ {+∞} is the penalty function, which is proper, lower semi-continuous (lsc), and bounded from below;
(A.2) F : ℝⁿ → ℝ is the loss function, which is finite-valued and differentiable, and its gradient ∇F is L-Lipschitz continuous.
Throughout, no convexity is imposed on either R or F.
The class of problems we consider is that of non-smooth and non-convex optimization problems. Here are some examples that are of particular relevance to problems in regression, machine learning and classification.
Example 1.1 (Sparse regression). Let A ∈ ℝ^{m×n}, y ∈ ℝ^m, λ > 0, and let ‖x‖₀ be the ℓ₀ pseudo-norm (see Example 4.1). Consider (see e.g. [11])
    min_{x∈ℝⁿ} (1/2)‖y - Ax‖² + λ‖x‖₀.    (1.1)
Example 1.2 (Principal component pursuit (PCP)). The PCP problem [9] aims at decomposing a given matrix into sparse and low-rank components:

    min_{(x_s,x_l)∈(ℝ^{n₁×n₂})²} (1/2)‖y - x_s - x_l‖²_F + λ₁‖x_s‖₀ + λ₂ rank(x_l),    (1.2)

where ‖·‖_F is the Frobenius norm and λ₁, λ₂ > 0.
Example 1.3 (Sparse Support Vector Machines). One would like to find a linear decision function which minimizes the objective

    min_{(b,x)∈ℝ×ℝⁿ} (1/m) Σ_{i=1}^{m} G(⟨x, z_i⟩ + b, y_i) + λ‖x‖₀,    (1.3)

where for i = 1, ..., m, (z_i, y_i) ∈ ℝⁿ × {±1} is the training set, and G is a smooth loss function with Lipschitz-continuous gradient, such as the squared hinge loss G(ŷ_i, y_i) = max(0, 1 - ŷ_i y_i)² or the logistic loss G(ŷ_i, y_i) = log(1 + e^{-ŷ_i y_i}).
(Inertial) Forward-Backward. The Forward-Backward splitting method for solving (P) reads

    x_{k+1} ∈ prox_{γ_k R}(x_k - γ_k ∇F(x_k)),    (1.4)

where γ_k > 0 is a descent step-size, and

    prox_{γR}(·) := Argmin_{x∈ℝⁿ} (1/2)‖x - ·‖² + γR(x)    (1.5)

denotes the proximity operator of R. prox_{γR}(x) is non-empty under (A.1) and is set-valued in general. Lower-boundedness of R can be relaxed by requiring e.g. coercivity of the objective in (1.5).
Since the pioneering work of Polyak [24] on the heavy-ball approach to gradient descent, several works have adapted this methodology to various optimization schemes, for instance the inertial proximal point algorithm [1, 2], or the inertial FB methods [22, 21, 4, 20]. The FISTA scheme [5, 10] also belongs to this class. See [20] for a detailed account.
The non-convex case. In the context of non-convex optimization, [3] was the first to establish convergence of the FB iterates when the objective Φ satisfies the Kurdyka-Łojasiewicz property¹. Following their footprints, [8, 23] established convergence of the special inertial schemes of [22] in the non-convex setting.
1.2 Contributions
In this paper, we introduce a novel inertial scheme (Algorithm 1) and study its global and local properties to solve the non-smooth and non-convex optimization problem (P). More precisely, our main contributions can be summarized as follows.
A globally convergent general inertial scheme. We propose a general multi-step inertial FB (MiFB) algorithm to solve (P). This algorithm is very flexible as it allows higher memory and even negative inertial parameters (unlike previous work [20]). Global convergence of any bounded sequence of iterates to a critical point is proved when the objective Φ is lower-bounded and satisfies the Kurdyka-Łojasiewicz property.
Local convergence properties under partial smoothness. Under the additional assumptions that the smooth part is locally C² around a critical point x* (where x_k → x*), and that the non-smooth component R is partly smooth (see Definition 3.1) relative to an active submanifold M_{x*}, we show that M_{x*} can be identified in finite time, i.e. x_k ∈ M_{x*} for all k large enough. Building on this finite identification result, we provide a sharp local linear convergence analysis and we characterize precisely the corresponding convergence rate which, in particular, reveals the role of M_{x*}. Moreover, this local convergence analysis naturally opens the door to higher-order acceleration, since under some circumstances, the original problem (P) is eventually equivalent to locally minimizing Φ on M_{x*}, and partial smoothness implies that Φ is actually C² on M_{x*}.
¹We are aware of the works existing on convergence of the objective sequence Φ(x_k) of FB, including rates, in the non-smooth and non-convex setting. But given that, in general, this does not say anything about convergence of the sequence of iterates x_k, they are irrelevant to our discussion.
Algorithm 1: A Multi-step Inertial Forward-Backward (MiFB)
Initial: s ≥ 1 is an integer, I = {0, 1, ..., s-1}, x₀ ∈ ℝⁿ and x_{-s} = ... = x_{-1} = x₀.
repeat
    Let 0 < γ̲ ≤ γ_k ≤ γ̄ < 1/L, {a_{0,k}, a_{1,k}, ...} ∈ ]-1, 2]^s, {b_{0,k}, b_{1,k}, ...} ∈ ]-1, 2]^s:
        y_{a,k} = x_k + Σ_{i∈I} a_{i,k} (x_{k-i} - x_{k-i-1}),
        y_{b,k} = x_k + Σ_{i∈I} b_{i,k} (x_{k-i} - x_{k-i-1}),    (1.6)
        x_{k+1} ∈ prox_{γ_k R}(y_{a,k} - γ_k ∇F(y_{b,k})).    (1.7)
    k = k + 1;
until convergence;
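As a concrete illustration, the following sketch implements the MiFB iteration (1.6)-(1.7) for a generic gradient and proximity operator. The step size, the inertial parameters, and the quadratic test problem are placeholder choices for illustration; they are not the tuned values used in the paper's experiments.

```python
import numpy as np

def mifb(grad_f, prox_r, x0, gamma, a, b, max_iter=500, tol=1e-10):
    """Multi-step inertial Forward-Backward (Algorithm 1) with s = len(a)
    constant inertial parameters a[i], b[i] and a fixed step size gamma."""
    s = len(a)
    history = [x0.copy() for _ in range(s + 1)]   # x_k, x_{k-1}, ..., x_{k-s}
    for _ in range(max_iter):
        xk = history[0]
        # Inertial extrapolation points (1.6).
        ya = xk + sum(a[i] * (history[i] - history[i + 1]) for i in range(s))
        yb = xk + sum(b[i] * (history[i] - history[i + 1]) for i in range(s))
        # Forward (gradient) step on yb, backward (prox) step from ya (1.7).
        x_next = prox_r(ya - gamma * grad_f(yb), gamma)
        history = [x_next] + history[:-1]
        if np.linalg.norm(x_next - xk) < tol:
            break
    return history[0]

# Toy usage on sparse regression (1.1): F(x) = 0.5*||y - A x||^2, R = ||.||_0.
rng = np.random.default_rng(0)
A = rng.standard_normal((48, 128))
x_true = rng.standard_normal(128) * (rng.random(128) < 0.08)
y = A @ x_true
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad F
grad_f = lambda x: A.T @ (A @ x - y)
prox_l0 = lambda z, g: np.where(np.abs(z) > np.sqrt(2 * g), z, 0.0)  # lambda = 1
x = mifb(grad_f, prox_l0, np.zeros(128), gamma=0.3 / L, a=[0.3, 0.1], b=[0.3, 0.1])
```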
1.3 Notations
Throughout the paper, ℕ is the set of non-negative integers. For a nonempty closed convex set Ω ⊂ ℝⁿ, ri(Ω) is its relative interior, and par(Ω) = ℝ(Ω - Ω) is the subspace parallel to it.
Let R : ℝⁿ → ℝ ∪ {+∞} be an lsc function; its domain is defined as dom(R) := {x ∈ ℝⁿ : R(x) < +∞}, and R is said to be proper if dom(R) ≠ ∅. We need the following notions from variational analysis; see e.g. [25] for details. Given x ∈ dom(R), the Fréchet subdifferential ∂^F R(x) of R at x is the set of vectors v ∈ ℝⁿ that satisfy

    lim inf_{z→x, z≠x} (1/‖x - z‖) (R(z) - R(x) - ⟨v, z - x⟩) ≥ 0.

If x ∉ dom(R), then ∂^F R(x) = ∅. The limiting-subdifferential (or simply subdifferential) of R at x, written ∂R(x), is defined as

    ∂R(x) := {v ∈ ℝⁿ : ∃ x_k → x, R(x_k) → R(x), v_k ∈ ∂^F R(x_k) → v}.

Denote dom(∂R) := {x ∈ ℝⁿ : ∂R(x) ≠ ∅}. Both ∂^F R(x) and ∂R(x) are closed, with ∂^F R(x) convex and ∂^F R(x) ⊂ ∂R(x) [25, Proposition 8.5]. Since R is lsc, it is (subdifferentially) regular at x if and only if ∂^F R(x) = ∂R(x) [25, Corollary 8.11].
An lsc function R is r-prox-regular at x̄ ∈ dom(R) for v̄ ∈ ∂R(x̄) if there exists r > 0 such that R(x′) > R(x) + ⟨v, x′ - x⟩ - (1/(2r))‖x′ - x‖² for all x′, x near x̄ with R(x) near R(x̄) and v ∈ ∂R(x) near v̄.
A necessary condition for x to be a minimizer of R is 0 ∈ ∂R(x). The set of critical points of R is crit(R) = {x ∈ ℝⁿ : 0 ∈ ∂R(x)}.
2 Global convergence of MiFB
2.1 Kurdyka-Łojasiewicz property
Let R : ℝⁿ → ℝ ∪ {+∞} be a proper lsc function. For η₁, η₂ such that -∞ < η₁ < η₂ < +∞, define the set [η₁ < R < η₂] := {x ∈ ℝⁿ : η₁ < R(x) < η₂}.
Definition 2.1. R is said to have the Kurdyka-Łojasiewicz property at x̄ ∈ dom(R) if there exist η ∈ ]0, +∞], a neighbourhood U of x̄ and a continuous concave function φ : [0, η[ → ℝ₊ such that
(i) φ(0) = 0, φ is C¹ on ]0, η[, and for all s ∈ ]0, η[, φ′(s) > 0;
(ii) for all x ∈ U ∩ [R(x̄) < R < R(x̄) + η], the Kurdyka-Łojasiewicz inequality holds:

    φ′(R(x) - R(x̄)) dist(0, ∂R(x)) ≥ 1.    (2.1)

Proper lsc functions which satisfy the Kurdyka-Łojasiewicz property at each point of dom(∂R) are called KL functions.
Roughly speaking, KL functions become sharp up to reparameterization via φ, called a desingularizing function for R. Typical KL functions are the class of semi-algebraic functions; see [6, 7]. For instance, the ℓ₀ pseudo-norm and the rank function (see Examples 1.1-1.3 and Section 4.1) are indeed KL.
2.2 Global convergence
Let β, δ > 0 be two constants. For i ∈ I and k ∈ ℕ, define the following quantities:

    μ_k := (1 - γ_k L - δ - βγ_k)/(2γ_k),  μ := lim inf_{k∈ℕ} μ_k,
    ν_{i,k} := (s a²_{i,k})/(2γ_k δ) + (s b²_{i,k} L²)/(2β),  ν_i := lim sup_{k∈ℕ} ν_{i,k}.    (2.2)
Theorem 2.2 (Convergence of MiFB (Algorithm 1)). For problem (P), suppose that (A.1)-(A.2) hold, and moreover Φ is a proper lsc KL function. For Algorithm 1, choose β, δ, γ_k, a_{i,k}, b_{i,k} such that

    σ := μ - Σ_{i∈I} ν_i > 0.    (2.3)

Then each bounded sequence {x_k}_{k∈ℕ} generated by MiFB satisfies:
(i) {x_k}_{k∈ℕ} has finite length, i.e. Σ_{k∈ℕ} ‖x_k - x_{k-1}‖ < +∞;
(ii) there exists a critical point x* ∈ crit(Φ) such that lim_{k→∞} x_k = x*;
(iii) if Φ has the KL property at a global minimizer x*, then, starting sufficiently close to x*, any sequence {x_k}_{k∈ℕ} converges to a global minimum of Φ and satisfies (i).
The proof is detailed in the supplementary material.
Remark 2.3.
(i) The convergence result holds true for any real Hilbert space. The boundedness of {x_k}_{k∈ℕ} is automatically ensured under standard assumptions such as coercivity of Φ.
(ii) It is known from [13] that if the desingularizing function is φ = C t^θ with C > 0 and θ ∈ [1/2, 1[, then global linear convergence of the objective and the iterates can be derived. However, we will not pursue this further since our main interest is local linear convergence.
(iii) Unlike existing work, negative inertial parameters are allowed by Theorem 2.2.
(iv) When a_{i,k} ≡ 0 and b_{i,k} ≡ 0, i.e. the case of FB splitting, condition (2.3) holds naturally as long as γ̄ < 1/L, which recovers the case of [3].
(v) From (2.2) and (2.3), we conclude the following:
    (a) s = 1: if b_{0,k} ≡ b and a_{0,k} ≡ a (i.e. constant inertial parameters), then (2.3) implies that a, b belong to an ellipsoid: a²/(2γ̄δ) + b²/(2β/L²) < σ̄ := (1 - γ̄L - δ - βγ̄)/(2γ̄).
    (b) When s ≥ 2, for each i ∈ I, let b_{i,k} = a_{i,k} ≡ a_i (i.e. constant symmetric inertial parameters); then (2.3) means that the a_i's live in a ball: (1/(2γ̄δ) + 1/(2β/L²)) Σ_{i∈I} a_i² < σ̄.
An empirical approach for inertial parameters. Besides Theorem 2.2, we also provide an empirical bound for the choice of the inertial parameters. Consider the setting γ_k ≡ γ ∈ ]0, 1/L[ and b_{i,k} = a_{i,k} ≡ a_i ∈ ]-1, 2[, i ∈ I. We have the following empirical bound for the summand Σ_{i∈I} a_i:

    Σ_i a_i ∈ [0, min{1, (1/L - γ)/|2γ - 1/L|}].    (2.4)

To ensure the convergence of {x_k}_{k∈ℕ}, an online updating rule should be applied together with the empirical bound. More precisely, choose a_i according to (2.4). Then, for each k ∈ ℕ, let b_{i,k} = a_{i,k} and choose a_{i,k} such that Σ_i a_{i,k} = min{Σ_i a_i, c_k}, where c_k > 0 is such that {c_k Σ_{i∈I} ‖x_{k-i} - x_{k-i-1}‖}_{k∈ℕ} is summable; for instance, c_k = c / (k^{1+q} Σ_{i∈I} ‖x_{k-i} - x_{k-i-1}‖), c > 0, q > 0.
Note that the choices of the summand Σ_i a_i allowed by (2.4) are larger than those of Theorem 2.2. For instance, (2.4) allows Σ_i a_i = 1 for γ ∈ ]0, 2/(3L)], while for Theorem 2.2, Σ_i a_i = 1 can be reached only when γ → 0.
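A minimal sketch of this online safeguard, under the stated setting (b_{i,k} = a_{i,k}, fixed γ), could look as follows; the constants c and q are illustrative choices only.

```python
import numpy as np

def safeguarded_inertia(a, history, k, c=1.0, q=0.5):
    """Rescale the inertial parameters a[i] so their sum does not exceed c_k,
    with c_k = c / (k^{1+q} * sum_i ||x_{k-i} - x_{k-i-1}||), which makes
    c_k * sum_i ||x_{k-i} - x_{k-i-1}|| summable over k."""
    diffs = sum(np.linalg.norm(history[i] - history[i + 1])
                for i in range(len(a)))
    if diffs == 0.0:
        return list(a)
    ck = c / ((k + 1) ** (1 + q) * diffs)
    total = sum(a)
    scale = min(total, ck) / total if total > 0 else 0.0
    return [ai * scale for ai in a]
```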
3 Local convergence properties of MiFB
3.1 Partial smoothness
Let M ⊂ ℝⁿ be a C² smooth submanifold, and let T_M(x) be the tangent space of M at any point x ∈ M.
Definition 3.1. The function R : ℝⁿ → ℝ ∪ {+∞} is C²-partly smooth at x̄ ∈ M relative to M for v̄ ∈ ∂R(x̄) ≠ ∅ if M is a C² submanifold around x̄, and
(i) (Smoothness): R restricted to M is C² around x̄;
(ii) (Regularity): R is regular at all x ∈ M near x̄ and R is r-prox-regular at x̄ for v̄;
(iii) (Sharpness): T_M(x̄) = par(∂R(x̄))^⊥;
(iv) (Continuity): the set-valued mapping ∂R is continuous at x̄ relative to M.
We denote the class of partly smooth functions at x relative to M for v as PSF_{x,v}(M). Partial smoothness was first introduced in [15] and its directional version stated here is due to [18, 12]. Prox-regularity is sufficient to ensure that the partly smooth submanifolds are locally unique [18, Corollary 4.12], [12, Lemma 2.3 and Proposition 10.12].
3.2 Finite activity identification
One of the key consequences of partial smoothness is finite identification of the partial smoothness submanifold associated to R for problem (P). This is formalized in the following statement.
Theorem 3.2 (Finite activity identification). Suppose that Algorithm 1 is run under the conditions of Theorem 2.2, such that the generated sequence {x_k}_{k∈ℕ} converges to a critical point x* ∈ crit(Φ). Assume that R ∈ PSF_{x*,-∇F(x*)}(M_{x*}) and that the non-degeneracy condition

    -∇F(x*) ∈ ri(∂R(x*))    (ND)

holds. Then x_k ∈ M_{x*} for all k large enough.
See the supplementary material for the proof. This result generalizes that of [20] to the non-convex case and multiple inertial steps.
3.3 Local linear convergence
Given γ ∈ ]0, 1/L[ and a critical point x* ∈ crit(Φ), let M_{x*} be a C² smooth submanifold with R ∈ PSF_{x*,-∇F(x*)}(M_{x*}). Denote T_{x*} := T_{M_{x*}}(x*) and the following matrices, which are all symmetric:

    H := γ P_{T_{x*}} ∇²F(x*) P_{T_{x*}},  G := Id - H,  Q := γ ∇²_{M_{x*}} Φ(x*) P_{T_{x*}} - H,    (3.1)

where ∇²_{M_{x*}} Φ is the Riemannian Hessian of Φ along the submanifold M_{x*} (readers may refer to the supplementary material for more details on differential calculus on Riemannian manifolds).
To state our local linear convergence result, the following assumptions will play a key role.
Restricted injectivity. Besides the local C² smoothness assumption on F, following the idea of [19, 20], we assume the restricted injectivity condition

    ker(∇²F(x*)) ∩ T_{x*} = {0}.    (RI)

Positive semi-definiteness of Q. Assume that Q is positive semi-definite, i.e. for all h ∈ T_{x*},

    ⟨h, Qh⟩ ≥ 0.    (3.2)

Under (3.2), Id + Q is symmetric positive definite, hence invertible; we denote P := (Id + Q)^{-1}.
Convergent parameters. The parameters of MiFB (Algorithm 1) are convergent, i.e.

    a_{i,k} → a_i, b_{i,k} → b_i, ∀i ∈ I, and γ_k → γ ∈ [γ̲, min{γ̄, r̄}],    (3.3)

where r̄ < r, and r is the prox-regularity modulus of R (see Definition 3.1).
Remark 3.3.
(i) Condition (3.2) can be met by various non-convex functions, such as polyhedral functions, including the ℓ₀ pseudo-norm. For the rank function, it is also observed that this condition holds in our numerical experiments of Section 4.
(ii) Condition (3.3) asserts that both the inertial parameters (a_{i,k}, b_{i,k}) and the step-size γ_k should converge to some limit points; this condition cannot be relaxed in general.
(iii) It can be shown that conditions (3.2) and (RI) together imply that x* is a local minimizer of Φ in (P), and Φ grows at least quadratically near x*. The arguments to prove this are essentially adapted from those used to show [20, Proposition 4.1(ii)].
We need the following notations:

    M₀ := (a₀ - b₀)P + (1 + b₀)PG,    M_s := -(a_{s-1} - b_{s-1})P - b_{s-1}PG,
    M_i := -((a_{i-1} - a_i) - (b_{i-1} - b_i))P - (b_{i-1} - b_i)PG,  i = 1, ..., s-1,

and, with rows separated by semicolons,

    M := [M₀ ··· M_{s-1} M_s; Id ··· 0 0; ⋮ ⋱ ⋮ ⋮; 0 ··· Id 0],
    d_k := [x_k - x*; x_{k-1} - x*; ⋮; x_{k-s} - x*].    (3.4)
Theorem 3.4 (Local linear convergence). Suppose that Algorithm 1 is run under the setting of Theorem 3.2. Moreover, assume that (RI), (3.2) and (3.3) hold. Then for all k large enough,

    d_{k+1} = M d_k + o(‖d_k‖).    (3.5)

If ρ(M) < 1, then for any ρ ∈ ]ρ(M), 1[ there exists K ∈ ℕ such that for all k ≥ K,

    ‖x_k - x*‖ = O(ρ^{k-K}).    (3.6)

In particular, if s = 1, then ρ(M) < 1 if R is locally polyhedral around x* or if a₀ = b₀.
See the supplementary material for the proof.
Remark 3.5.
(i) When s = 1, ρ(M) can be given explicitly in terms of the parameters of the algorithm (i.e. a₀, b₀ and γ); see [20, Section 4.2] for details. However, the spectral analysis of M becomes much more complicated for s ≥ 2, where the analysis of at least cubic equations is involved. Therefore, for the sake of brevity, we shall skip the detailed discussion here.
(ii) When s = 1, it was shown in [20] that the optimal convergence rate that can be obtained by a 1-step inertial scheme with fixed γ is ρ*_{s=1} = 1 - √(1 - αγ), where from condition (RI), continuity of ∇²F at x* implies that there exist α > 0 and a neighbourhood of x* such that ⟨h, ∇²F(x*)h⟩ ≥ α‖h‖² for all h ∈ T_{x*}. As we will see in the numerical experiments of Section 4, such a rate can be improved by our multi-step inertial scheme. Taking s = 2 for example, we will show that, for a certain class of functions, the optimal local linear rate is close to or even equal to ρ*_{s=2} = 1 - (1 - αγ)^{1/3}, which is obviously faster than ρ*_{s=1}.
(iii) Though it can be satisfied for many problems in practice, the restricted injectivity (RI) can be removed for some penalties R, for instance when R is locally polyhedral near x*.
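To make the rate discussion tangible, the following sketch builds the block matrix M of (3.4) in the scalar case (P and G reduced to scalars p and g, so each block M_i is a scalar) and computes its spectral radius; the parameter values are arbitrary illustrations, not values from the paper.

```python
import numpy as np

def spectral_radius_M(a, b, p, g):
    """Spectral radius of M in (3.4) for scalar P = p and G = g,
    with s = len(a) inertial steps."""
    s = len(a)
    blocks = [(a[0] - b[0]) * p + (1 + b[0]) * p * g]            # M_0
    blocks += [-((a[i-1] - a[i]) - (b[i-1] - b[i])) * p
               - (b[i-1] - b[i]) * p * g for i in range(1, s)]   # M_1..M_{s-1}
    blocks += [-(a[s-1] - b[s-1]) * p - b[s-1] * p * g]          # M_s
    M = np.zeros((s + 1, s + 1))
    M[0, :] = blocks
    M[1:, :-1] = np.eye(s)        # shift structure of the companion-like block
    return max(abs(np.linalg.eigvals(M)))

print(spectral_radius_M(a=[0.3, 0.1], b=[0.3, 0.1], p=0.9, g=0.5))
```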
4 Numerical experiments
In this section, we illustrate our results with some numerical experiments carried out on the problems in Examples 1.1, 1.2 and 1.3.
4.1 Examples of KL and partly smooth functions
All the objectives Φ in the above mentioned examples are continuous KL functions. Indeed, in Examples 1.1 and 1.2, Φ is the sum of semi-algebraic functions, which is also semi-algebraic. In Example 1.3, Φ is also algebraic when G is the squared hinge loss, and definable in an o-minimal structure for the logistic loss (see e.g. [26] for material on o-minimal structures).
Moreover, R is partly smooth in all these examples, as we show now.
Example 4.1 (ℓ₀ pseudo-norm). The ℓ₀ pseudo-norm is locally constant. Moreover, it is regular on ℝⁿ ([14, Remark 2]) and its subdifferential is given by (see [14, Theorem 1])

    ∂‖x‖₀ = span{(e_i)_{i∈supp(x)^c}},

where (e_i)_{i=1,...,n} is the standard basis and supp(x) = {i : x_i ≠ 0}. The proximity operator of the ℓ₀-norm is given by hard-thresholding:

    prox_{γ‖·‖₀}(z) = z       if |z| > √(2γ),
                      {0, z}  if |z| = √(2γ),
                      0       if |z| < √(2γ).

It can then be easily verified that the ℓ₀ pseudo-norm is partly smooth at any x relative to the subspace

    M_x = T_x = {z ∈ ℝⁿ : supp(z) ⊆ supp(x)}.

It is also prox-regular at x for any bounded v ∈ ∂‖x‖₀. Note also that condition (ND) is automatically verified, and that the Riemannian gradient and Hessian of ‖·‖₀ along T_x vanish.
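The hard-thresholding formula can be checked numerically by comparing it against a direct minimization of (1.5) over the only two candidate values per coordinate, 0 and z itself; this is a small sanity check, not part of the paper's code.

```python
import numpy as np

def prox_l0(z, gamma):
    """Hard-thresholding: entrywise proximity operator of gamma*||.||_0
    (keeping z on the measure-zero tie |z| = sqrt(2*gamma))."""
    return np.where(np.abs(z) >= np.sqrt(2.0 * gamma), z, 0.0)

# Sanity check: per coordinate, the prox is attained either at 0 or at z.
rng = np.random.default_rng(1)
z, gamma = rng.standard_normal(1000), 0.3
obj = lambda x: 0.5 * (x - z) ** 2 + gamma * (x != 0)
brute = np.where(obj(z) < obj(np.zeros_like(z)), z, 0.0)
assert np.allclose(prox_l0(z, gamma), brute)
```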
Example 4.2 (Rank). The rank function is the spectral extension of the ℓ₀ pseudo-norm to matrix-valued data x ∈ ℝ^{n₁×n₂} [17]. Consider a singular value decomposition (SVD) of x, i.e. x = U diag(σ(x)) Vᵀ, where U = {u₁, ..., u_n}, V = {v₁, ..., v_n} are orthonormal matrices, and σ(x) = (σ_i(x))_{i=1,...,n} is the vector of singular values. By definition, rank(x) = ‖σ(x)‖₀. Thus the rank function is partly smooth at x relative to the set of fixed-rank matrices

    M_x = {z ∈ ℝ^{n₁×n₂} : rank(z) = rank(x)},

which is a C² smooth submanifold [16]. The tangent space of M_x at x is

    T_{M_x}(x) = T_x = {z ∈ ℝ^{n₁×n₂} : u_iᵀ z v_j = 0, for all r < i ≤ n₁, r < j ≤ n₂},

where r = rank(x). The rank function is also regular, and its subdifferential reads

    ∂rank(x) = U ∂‖σ(x)‖₀ Vᵀ = U span{(e_i)_{i∈supp(σ(x))^c}} Vᵀ,

which is a vector space (see [14, Theorem 4 and Proposition 1]). The proximity operator of the rank function amounts to applying hard-thresholding to the singular values. Observe that, by definition of M_x, the Riemannian gradient and Hessian of the rank function along M_x also vanish.
For Example 1.2, it is worth noting from the above examples and the separability of the regularizer that the latter is also partly smooth relative to the Cartesian product of the partial smoothness submanifolds of ℓ₀ and the rank function.
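For completeness, a sketch of the rank prox via hard-thresholding of the singular values; this mirrors the scalar case above and is an illustration, not the paper's code.

```python
import numpy as np

def prox_rank(z, gamma):
    """Proximity operator of gamma*rank(.): hard-threshold the singular
    values of z at sqrt(2*gamma) and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(z, full_matrices=False)
    s_kept = np.where(s >= np.sqrt(2.0 * gamma), s, 0.0)
    return U @ np.diag(s_kept) @ Vt

z = np.random.default_rng(2).standard_normal((50, 50))
low_rank = prox_rank(z, gamma=50.0)   # keeps only the leading singular values
print(np.linalg.matrix_rank(low_rank))
```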
4.2 Experimental results
For the problem in Example 1.1, we generated y = Ax_ob + ε with m = 48, n = 128, where the entries of A are i.i.d. zero-mean unit-variance Gaussian, x_ob is 8-sparse, and ε ∈ ℝ^m is an additive noise with small variance.
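This setup can be reproduced in a few lines; the noise level below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 48, 128, 8
A = rng.standard_normal((m, n))                # i.i.d. zero-mean, unit variance
x_ob = np.zeros(n)
x_ob[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # 8-sparse
eps = 0.01 * rng.standard_normal(m)            # small-variance additive noise
y = A @ x_ob + eps
```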
For the problem in Example 1.2, we generated y = x_s + x_l + ε, with n₁ = n₂ = 50, where x_s is 250-sparse, x_l has rank 5, and ε is an additive noise with small variance.
For Example 1.3, we generated m = 64 training samples with an n = 96-dimensional feature space.
For all presented numerical results, 3 different settings were tested:
• the FB method, with γ_k ≡ 0.3/L, denoted "FB";
• MiFB with s = 1, b_k = a_k ≡ a and γ_k ≡ 0.3/L, denoted "1-iFB";
• MiFB with s = 2, b_{i,k} = a_{i,k} ≡ a_i, i = 0, 1, and γ_k ≡ 0.3/L, denoted "2-iFB".
Tightness of theoretical prediction. The convergence profiles of ‖x_k - x*‖ are shown in Figure 1. As can be seen from all the plots, finite identification and local linear convergence indeed occur. The position of each green dot indicates the iteration from which x_k numerically identifies the submanifold M_{x*}. The solid lines ("P") represent practical observations, while the dashed lines ("T") denote theoretical predictions.
As the Riemannian Hessians of ℓ₀ and the rank both vanish in all examples, our predicted rates coincide exactly with the observed ones (same slopes for the dashed and solid lines).
[Figure 1 appears here: three semi-log plots of ‖x_k - x*‖ versus k for (a) sparse regression, (b) PCP, and (c) sparse SVM, comparing FB, 1-iFB, and 2-iFB (practical "P" and theoretical "T" curves), the locally optimized 2-iFB, and the reference rates 1 - √(1 - αγ) and 1 - (1 - αγ)^{1/3}.]
Figure 1: Finite identification and local linear convergence of MiFB under different inertial settings in terms of ‖x_k - x*‖. "P" stands for practical observation and "T" indicates the theoretical estimate. We fix γ_k ≡ 0.3/L for all tests. For the two inertial schemes, inertial parameters are first chosen such that (2.3) holds. The position of the green dot in each plot indicates the iteration beyond which identification of M_{x*} occurs.
Comparison of the methods. Under the tested settings, we draw the following remarks on the comparison of the inertial schemes:
• The MiFB schemes are much faster than FB both globally and locally. Finite activity identification also occurs earlier for MiFB than for FB;
• Comparing the two MiFB inertial schemes, "2-iFB" outperforms "1-iFB", showing the advantage of a 2-step inertial scheme over the 1-step one.
Optimal first-order method. To highlight the potential of multiple steps in MiFB, for the "2-iFB" scheme we also added an example where we locally optimized the rate over the inertial parameters. See the magenta lines in all the examples, where the solid line corresponds to the observed profile for the optimal inertial parameters, the dashed line stands for the rate 1 - √(1 - αγ), and the dotted line is that of 1 - (1 - αγ)^{1/3}; this shows that a faster linear rate can indeed be obtained owing to multiple inertial parameters.
We refer to [20, Section 4.5] for the optimal choice of inertial parameters for the case s = 1.
The empirical bound (2.4) and the number of inertial steps s. We now present a short comparison of the empirical bound (2.4) on the inertial parameters and of different choices of s, under the bigger choice γ = 0.8/L. MiFB with 3 inertial steps, i.e. s = 3, is added and denoted "3-iFB"; see Figure 2. Similarly to the above experiments, we choose b_{i,k} = a_{i,k} ≡ a_i, i ∈ I; "Thm 2.2" means that the a_i's are chosen according to Theorem 2.2, while "Bnd (2.4)" means that the a_i's are chosen based on the empirical bound (2.4). We can infer the following from Figure 2. Compared to the results in Figure 1, a bigger choice of γ leads to faster convergence. Still, under the same choice of γ, MiFB is faster than FB both locally and globally. For either "Thm 2.2" or "Bnd (2.4)", the performance of the three MiFB schemes is close; this is mainly because the values of the sum Σ_{i∈I} a_i for each scheme are close. Between "Thm 2.2" and "Bnd (2.4)", "Bnd (2.4)" shows faster convergence, since the value of Σ_{i∈I} a_i allowed by (2.4) is bigger than that of Theorem 2.2. It should be noted that, when γ ∈ ]0, 2/(3L)], the largest value of Σ_{i∈I} a_i allowed by (2.4) is 1. If we choose Σ_{i∈I} a_i equal or very close to 1, then it can be observed in practice that MiFB locally oscillates, which is a well-known property of the FISTA scheme [5, 10]. We refer to [20, Section 4.4] for a discussion of the properties of such oscillation behaviour.
[Figure 2 appears here: three semi-log plots of ‖x_k - x*‖ versus k for (a) sparse regression, (b) PCP, and (c) sparse SVM, comparing FB with 1-, 2- and 3-iFB, with inertial parameters chosen by Theorem 2.2 and by the empirical bound (2.4).]
Figure 2: Comparison of MiFB under different inertial settings. We fix γ_k ≡ 0.8/L for all tests. For the three inertial schemes, the inertial parameters were chosen such that (2.3) holds.
Acknowledgments
This work was partly supported by the European Research Council (ERC project SIGMA-Vision).
References
[1] F. Alvarez. On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM Journal on Control and Optimization, 38(4):1102–1119, 2000.
[2] F. Alvarez and H. Attouch. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Analysis, 9(1-2):3–11, 2001.
[3] H. Attouch, J. Bolte, and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming, 137(1-2):91–129, 2013.
[4] H. Attouch, J. Peypouquet, and P. Redont. A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim., 24(1):232–256, 2014.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[6] J. Bolte, A. Daniilidis, and A. Lewis. The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM Journal on Optimization, 17(4):1205–1223, 2007.
[7] J. Bolte, A. Daniilidis, O. Ley, and L. Mazet. Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity. Transactions of the American Mathematical Society, 362(6):3319–3363, 2010.
[8] R. I. Boţ, E. R. Csetnek, and S. C. László. An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. EURO Journal on Computational Optimization, pages 1–23, 2014.
[9] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[10] A. Chambolle and C. Dossal. On the convergence of the iterates of the "Fast Iterative Shrinkage/Thresholding Algorithm". Journal of Optimization Theory and Applications, pages 1–15, 2015.
[11] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52(1):6–18, 2006.
[12] D. Drusvyatskiy and A. S. Lewis. Optimality, identifiability, and sensitivity. Mathematical Programming, pages 1–32, 2013.
[13] P. Frankel, G. Garrigos, and J. Peypouquet. Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates. Journal of Optimization Theory and Applications, 165(3):874–900, 2015.
[14] H. Y. Le. Generalized subdifferentials of the rank function. Optimization Letters, 7(4):731–743, 2013.
[15] A. S. Lewis. Active sets, nonsmoothness, and sensitivity. SIAM J. on Optimization, 13(3):702–725, 2003.
[16] A. S. Lewis and J. Malick. Alternating projections on manifolds. Mathematics of Operations Research, 33(1):216–234, 2008.
[17] A. S. Lewis and H. S. Sendov. Twice differentiable spectral functions. SIAM Journal on Matrix Analysis and Applications, 23(2):368–386, 2001.
[18] A. S. Lewis and S. Zhang. Partial smoothness, tilt stability, and generalized Hessians. SIAM Journal on Optimization, 23(1):74–94, 2013.
[19] J. Liang, J. Fadili, and G. Peyré. Local linear convergence of forward-backward under partial smoothness. In Advances in Neural Information Processing Systems, pages 1970–1978, 2014.
[20] J. Liang, J. Fadili, and G. Peyré. Activity identification and local linear convergence of forward-backward-type methods. arXiv:1503.03703, 2015.
[21] D. A. Lorenz and T. Pock. An inertial forward-backward algorithm for monotone inclusions. Journal of Mathematical Imaging and Vision, 51(2):311–325, 2014.
[22] A. Moudafi and M. Oliny. Convergence of a splitting inertial proximal method for monotone operators. Journal of Computational and Applied Mathematics, 155(2):447–454, 2003.
[23] P. Ochs, Y. Chen, T. Brox, and T. Pock. iPiano: inertial proximal algorithm for nonconvex optimization. SIAM Journal on Imaging Sciences, 7(2):1388–1419, 2014.
[24] B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.
[25] R. T. Rockafellar and R. Wets. Variational Analysis, volume 317. Springer Verlag, 1998.
[26] L. van den Dries. Tame Topology and O-minimal Structures, volume 248 of London Mathematical Society Lecture Note Series. Cambridge University Press, New York, 1998.
Barzilai-Borwein Step Size for Stochastic Gradient Descent
Conghui Tan
The Chinese University of Hong Kong
[email protected]
Shiqian Ma
The Chinese University of Hong Kong
[email protected]
Yu-Hong Dai
Chinese Academy of Sciences, Beijing, China
[email protected]
Yuqiu Qian
The University of Hong Kong
[email protected]
Abstract
One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization methods, the common practice in SGD is either to use a diminishing step size, or to tune a fixed step size by hand, which can be time consuming in practice. In this paper, we propose to use the Barzilai-Borwein (BB) method to automatically compute step sizes for SGD and its variant, the stochastic variance reduced gradient (SVRG) method, which leads to two algorithms: SGD-BB and SVRG-BB. We prove that SVRG-BB converges linearly for strongly convex objective functions. As a by-product, we prove the linear convergence result of SVRG with Option I proposed in [10], whose convergence result has been missing in the literature. Numerical experiments on standard data sets show that the performance of SGD-BB and SVRG-BB is comparable to, and sometimes even better than, SGD and SVRG with best-tuned step sizes, and is superior to some advanced SGD variants.
1 Introduction
The following optimization problem, which minimizes the sum of cost functions over samples from a finite training set, appears frequently in machine learning:

    min_x F(x) := (1/n) Σ_{i=1}^{n} f_i(x),    (1)

where n is the sample size, and each f_i : ℝ^d → ℝ is the cost function corresponding to the i-th sample. Throughout this paper, we assume that each f_i is convex and differentiable, and that the function F is strongly convex. Problem (1) is challenging when n is extremely large, so that computing F(x) and ∇F(x) for a given x is prohibitive. The stochastic gradient descent (SGD) method and its variants have been the main approaches for solving (1). In the t-th iteration of SGD, a random training sample i_t is chosen from {1, 2, ..., n} and the iterate x_t is updated by

    x_{t+1} = x_t - η_t ∇f_{i_t}(x_t),    (2)

where ∇f_{i_t}(x_t) denotes the gradient of the i_t-th component function at x_t, and η_t > 0 is the step size (a.k.a. learning rate). In (2), it is usually assumed that ∇f_{i_t} is an unbiased estimator of ∇F, i.e.,

    E[∇f_{i_t}(x_t) | x_t] = ∇F(x_t).    (3)
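For concreteness, one SGD pass under (2), with a decaying step size as a placeholder policy:

```python
import numpy as np

def sgd(grad_i, n, x0, num_iters, eta=lambda t: 1.0 / (1 + t)):
    """Plain SGD (2): at step t, sample i_t uniformly and move along
    the negative stochastic gradient of f_{i_t}."""
    x, rng = x0.copy(), np.random.default_rng(0)
    for t in range(num_iters):
        i_t = rng.integers(n)
        x -= eta(t) * grad_i(x, i_t)
    return x
```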
However, it is known that the total number of gradient evaluations of SGD depends on the variance of the stochastic gradients, and that SGD has only a sublinear convergence rate for the strongly convex and smooth problem (1). As a result, many works along this line have focused on designing variants of SGD that can reduce the variance and improve the complexity. Some popular methods include the stochastic average gradient (SAG) method [16], the SAGA method [7], the stochastic dual coordinate ascent (SDCA) method [17], and the stochastic variance reduced gradient (SVRG) method [10]. These methods are proven to converge linearly on strongly convex problems.
As pointed out by Le Roux et al. [16], one important issue regarding stochastic algorithms that has not been fully addressed in the literature is how to choose an appropriate step size η_t while running the algorithm. In the classical gradient descent method, the step size is usually obtained by employing line search techniques. However, line search is computationally prohibitive in stochastic gradient methods because one only has sub-sampled information of the function value and the gradient. As a result, for SGD and its variants used in practice, people usually use a diminishing step size η_t, or a best-tuned fixed step size. Neither of these two approaches can be efficient.
Some recent works that discuss the choice of step size in SGD are summarized as follows. AdaGrad [8] scales the gradient by the square root of the accumulated magnitudes of the gradients in the past iterations, but this still requires one to decide a fixed step size η. [16] suggests a line search technique on the component function f_{i_k}(x) selected in each iteration, to estimate a step size for SAG. [12] suggests performing line search for an estimated function, which is evaluated by a Gaussian process with samples f_{i_t}(x_t). [13] suggests generating the step sizes by a given function with an unknown parameter, and using online SGD to update this unknown parameter.
Our contributions in this paper are severalfold.
(i) We propose to use the Barzilai-Borwein (BB) method to compute the step size for SGD and SVRG. The two new methods are named SGD-BB and SVRG-BB, respectively. The per-iteration computational cost of SGD-BB and SVRG-BB is almost the same as that of SGD and SVRG, respectively.
(ii) We prove the linear convergence of SVRG-BB for strongly convex functions. As a by-product, we show the linear convergence of SVRG with Option I (SVRG-I) proposed in [10]. Note that in [10] only the convergence of SVRG with Option II (SVRG-II) was given, and the proof for SVRG-I has been missing in the literature. However, SVRG-I is numerically a better choice than SVRG-II, as demonstrated in [10].
(iii) We conduct numerical experiments for SGD-BB and SVRG-BB on solving logistic regression and SVM problems. The numerical results show that SGD-BB and SVRG-BB are comparable to, and sometimes even better than, SGD and SVRG with best-tuned step sizes. We also compare SGD-BB with some advanced SGD variants, and demonstrate that our method is superior.
The rest of this paper is organized as follows. In Section 2 we briefly introduce the BB method in the deterministic setting. In Section 3 we propose our SVRG-BB method, and prove its linear convergence for strongly convex functions. As a by-product, we also prove the linear convergence of SVRG-I. In Section 4 we propose our SGD-BB method. A smoothing technique is also implemented to improve the performance of SGD-BB. Finally, we conduct some numerical experiments for SVRG-BB and SGD-BB in Section 5.
2 The Barzilai-Borwein Step Size
The BB method, proposed by Barzilai and Borwein in [2], has been proven to be very successful in solving nonlinear optimization problems. The key idea behind the BB method is motivated by quasi-Newton methods. Suppose we want to solve the unconstrained minimization problem

    min_x f(x),    (4)

where f is differentiable. A typical iteration of quasi-Newton methods for solving (4) is

    x_{t+1} = x_t - B_t^{-1} ∇f(x_t),    (5)

where B_t is an approximation of the Hessian matrix of f at the current iterate x_t. The most important feature of B_t is that it must satisfy the so-called secant equation B_t s_t = y_t, where s_t = x_t - x_{t-1} and y_t = ∇f(x_t) - ∇f(x_{t-1}) for t ≥ 1. Note that in (5) one needs to solve a linear system, which may be time consuming when B_t is large and dense.
One way to alleviate this burden is to use the BB method, which replaces B_t by a scalar matrix (1/η_t)I. However, one cannot choose a scalar η_t such that the secant equation holds with B_t = (1/η_t)I. Instead, one can find η_t such that the residual of the secant equation, i.e., ‖(1/η_t)s_t - y_t‖²₂, is minimized, which leads to the following choice of η_t:

    η_t = ‖s_t‖²₂ / (s_tᵀ y_t).    (6)

Therefore, a typical iteration of the BB method for solving (4) is

    x_{t+1} = x_t - η_t ∇f(x_t),    (7)

where η_t is computed by (6).
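As an illustration, a minimal deterministic BB gradient descent loop implementing (6)-(7); the quadratic test function and the initial step size are arbitrary choices for the sketch.

```python
import numpy as np

def bb_gradient_descent(grad, x0, eta0=1e-3, max_iter=100, tol=1e-8):
    """Gradient descent with the Barzilai-Borwein step size (6)-(7)."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - eta0 * g_prev            # first step uses a provided size
    for _ in range(max_iter):
        g = grad(x)
        s, y = x - x_prev, g - g_prev     # secant pair
        eta = (s @ s) / (s @ y)           # BB step (6); s'y > 0 for convex f
        x_prev, g_prev = x, g
        x = x - eta * g                   # BB iteration (7)
        if np.linalg.norm(g) < tol:
            break
    return x

# Toy usage: minimize f(x) = 0.5 * x'Ax with A positive definite.
A = np.diag(np.arange(1.0, 11.0))
x_star = bb_gradient_descent(lambda x: A @ x, np.ones(10))
```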
For convergence analysis, generalizations and variants of the BB method, we refer the interested reader to [14, 15, 6, 9, 4, 5, 3] and the references therein. Recently, the BB method has been successfully applied to problems arising from emerging applications, such as compressed sensing [21], sparse reconstruction [20] and image processing [19].
3
Barzilai-Borwein Step Size for SVRG
We see from (7) and (6) that the BB method needs no parameters: the step size is computed while running the algorithm. This has been our main motivation for working out a black-box stochastic gradient descent method that computes the step size automatically, without requiring any user-supplied parameters. In this section, we propose to incorporate the BB step size into SVRG, which leads to the SVRG-BB method.
3.1 SVRG-BB Method
Stochastic variance reduced gradient (SVRG) is a variant of SGD proposed in [10], which utilizes a variance reduction technique to alleviate the impact of the random sampling of the gradients. SVRG computes the full gradient ∇F(x) of (1) once every m iterations, where m is a pre-given integer, and the full gradient is then used to generate stochastic gradients with lower variance in the next m iterations (the next epoch). In SVRG, the step size η needs to be provided by the user. According to [10], the choice of η depends on the Lipschitz constant of F, which is usually difficult to estimate in practice.
Our SVRG-BB algorithm is described in Algorithm 1. The only difference between SVRG and SVRG-BB is that the latter uses the BB method to compute the step size η_k, instead of using a prefixed η as in SVRG.
Algorithm 1 SVRG with BB step size (SVRG-BB)
Parameters: update frequency m, initial point x̃_0, initial step size η_0 (only used in the first epoch)
for k = 0, 1, ... do
    g_k = (1/n) Σ_{i=1}^n ∇f_i(x̃_k)
    if k > 0 then
        η_k = (1/m) · ‖x̃_k − x̃_{k−1}‖₂² / ((x̃_k − x̃_{k−1})ᵀ(g_k − g_{k−1}))    (4)
    end if
    x_0 = x̃_k
    for t = 0, ..., m − 1 do
        Randomly pick i_t ∈ {1, ..., n}
        x_{t+1} = x_t − η_k (∇f_{i_t}(x_t) − ∇f_{i_t}(x̃_k) + g_k)
    end for
    Option I: x̃_{k+1} = x_m
    Option II: x̃_{k+1} = x_t for randomly chosen t ∈ {1, ..., m}
end for
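The following Python sketch translates Algorithm 1 (with Option I) into code. It is a minimal illustration, not the authors' implementation; the gradient oracle grad_i, the random data, and the hyperparameter values are placeholder choices for demonstration.

import numpy as np

def svrg_bb(grad_i, n, d, m, eta0, epochs, rng):
    """SVRG-BB with Option I; grad_i(x, i) returns the gradient of f_i at x."""
    x_tilde = np.zeros(d)
    x_tilde_prev, g_prev, eta = None, None, eta0
    for k in range(epochs):
        g = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)  # full gradient g_k
        if k > 0:
            s = x_tilde - x_tilde_prev
            eta = s.dot(s) / (m * s.dot(g - g_prev))  # BB step size divided by m
        x_tilde_prev, g_prev = x_tilde, g
        x = x_tilde.copy()
        for _ in range(m):
            i = int(rng.integers(n))
            x -= eta * (grad_i(x, i) - grad_i(x_tilde, i) + g)  # variance-reduced step
        x_tilde = x  # Option I
    return x_tilde

# Toy usage: l2-regularized logistic regression on random data.
rng = np.random.default_rng(0)
n, d, lam = 200, 10, 1e-3
A = rng.standard_normal((n, d))
b = rng.choice([-1.0, 1.0], size=n)

def grad_i(x, i):
    z = -b[i] * A[i].dot(x)                        # z = -b_i a_i^T x
    return -b[i] * A[i] / (1.0 + np.exp(-z)) + lam * x

x_hat = svrg_bb(grad_i, n, d, m=2 * n, eta0=0.1, epochs=20, rng=rng)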
Remark 1. A few remarks on the SVRG-BB algorithm are in order.
(i) If we always set η_k = η in SVRG-BB instead of using (4), then it reduces to the original SVRG.
(ii) One may notice that η_k is equal to the step size computed by the BB formula (6) divided by m. This is because in the inner loop for updating x_t, m unbiased gradient estimators are added to x_0 to get x_m.
(iii) For the first epoch of SVRG-BB, a step size η_0 needs to be specified. However, we observed in our numerical experiments that the performance of SVRG-BB is not sensitive to the choice of η_0.
(iv) The BB step size can also be naturally incorporated into other SVRG variants, such as SVRG with batching [1].
3.2 Linear Convergence Analysis
In this section, we analyze the linear convergence of SVRG-BB (Algorithm 1) for solving (1) with a strongly convex objective F(x); as a by-product, our analysis also proves the linear convergence of SVRG-I. The proofs in this section are provided in the supplementary materials. The following assumption is made throughout this section.
Assumption 1. We assume that (3) holds for any x_t. We assume that the objective function F(x) is μ-strongly convex, i.e.,

F(y) ≥ F(x) + ∇F(x)ᵀ(y − x) + (μ/2)‖x − y‖₂²,    ∀x, y ∈ ℝᵈ.

We also assume that the gradient of each component function f_i(x) is L-Lipschitz continuous, i.e.,

‖∇f_i(x) − ∇f_i(y)‖₂ ≤ L‖x − y‖₂,    ∀x, y ∈ ℝᵈ.

Under this assumption, it is easy to see that ∇F(x) is also L-Lipschitz continuous.
We first provide the following lemma, which reveals the relationship between the distances of two consecutive iterates to the optimal point.

Lemma 1. Define

α_k := (1 − 2η_kμ(1 − η_kL))^m + 4η_kL² / (μ(1 − η_kL)).    (8)

For both SVRG-I and SVRG-BB, we have the following inequality for the k-th epoch:

E‖x̃_{k+1} − x*‖₂² < α_k ‖x̃_k − x*‖₂²,

where x* is the optimal solution to (1).
The linear convergence of SVRG-I follows immediately.

Corollary 1. In SVRG-I, if m and η are chosen such that

α := (1 − 2ημ(1 − ηL))^m + 4ηL² / (μ(1 − ηL)) < 1,    (9)

then SVRG-I converges linearly in expectation:

E‖x̃_k − x*‖₂² < α^k ‖x̃_0 − x*‖₂².
Remark 2. We now give some remarks on this convergence result.
(i) To the best of our knowledge, this is the first time that the linear convergence of SVRG-I has been established.
(ii) The convergence result given in Corollary 1 is for the iterates x̃_k, while the one given in [10] is for the objective function values F(x̃_k).
The following theorem establishes the linear convergence of SVRG-BB (Algorithm 1).

Theorem 1. Denote θ = (1 − e^{−2μ/L})/2. Note that θ ∈ (0, 1/2). In SVRG-BB, if m is chosen such that

m > max{ 2 / (log(1 − 2θ) + 2μ/L), 4L²/(μ²θ) + L/μ },    (10)

then SVRG-BB (Algorithm 1) converges linearly in expectation:

E‖x̃_k − x*‖₂² < (1 − θ)^k ‖x̃_0 − x*‖₂².
4 Barzilai-Borwein Step Size for SGD
In this section, we propose to incorporate the BB method into SGD (2). The BB method does not apply to SGD directly, because SGD never computes the full gradient ∇F(x). One might suggest using ∇f_{i_{t+1}}(x_{t+1}) − ∇f_{i_t}(x_t) to estimate ∇F(x_{t+1}) − ∇F(x_t) when computing the BB step size via formula (6). However, this approach does not work well because of the variance of the stochastic gradients. The recent work by Sopyła and Drozda [18] suggested several variants of this idea to compute an estimated BB step size using the stochastic gradients. However, these ideas lack theoretical justification, and the numerical results in [18] show that these approaches are inferior to some existing methods.
The SGD-BB algorithm we propose in this paper works in the following manner. We call every m iterations of SGD one epoch. Following the idea of SVRG-BB, SGD-BB uses the same step size, computed by the BB formula, throughout each epoch. Our SGD-BB algorithm is described in Algorithm 2.
Algorithm 2 SGD with BB step size (SGD-BB)
Parameters: update frequency m, initial step sizes η_0 and η_1 (only used in the first two epochs), weighting parameter β ∈ (0, 1), initial point x̃_0
for k = 0, 1, ... do
    if k > 0 then
        η_k = (1/m) · ‖x̃_k − x̃_{k−1}‖₂² / |(x̃_k − x̃_{k−1})ᵀ(ĝ_k − ĝ_{k−1})|
    end if
    x_0 = x̃_k
    ĝ_{k+1} = 0
    for t = 0, ..., m − 1 do
        Randomly pick i_t ∈ {1, ..., n}
        x_{t+1} = x_t − η_k ∇f_{i_t}(x_t)    (∗)
        ĝ_{k+1} = β ∇f_{i_t}(x_t) + (1 − β) ĝ_{k+1}
    end for
    x̃_{k+1} = x_m
end for
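A minimal Python sketch of Algorithm 2 follows (without the smoothing step of Section 4.1, which is sketched there). As above, this is an illustrative rendering rather than the authors' code. Note that the BB formula first becomes usable at k = 2, once two epoch-averaged gradients are available, which is why both η_0 and η_1 are inputs.

import numpy as np

def sgd_bb(grad_i, n, d, m, eta0, eta1, beta, epochs, rng):
    """SGD-BB (Algorithm 2); grad_i(x, i) returns the gradient of f_i at x."""
    x_tilde, x_tilde_prev = np.zeros(d), None
    ghat, ghat_prev = None, None
    eta = eta0
    for k in range(epochs):
        if k == 1:
            eta = eta1
        elif k > 1:
            s = x_tilde - x_tilde_prev
            eta = s.dot(s) / (m * abs(s.dot(ghat - ghat_prev)))  # BB step with |.|
        x = x_tilde.copy()
        gnew = np.zeros(d)
        for _ in range(m):
            i = int(rng.integers(n))
            g = grad_i(x, i)
            x -= eta * g                             # plain SGD step, line (*)
            gnew = beta * g + (1.0 - beta) * gnew    # recency-weighted gradient average
        x_tilde_prev, x_tilde = x_tilde, x
        ghat_prev, ghat = ghat, gnew
    return x_tilde

With β = 1/m, gnew approximately reduces to the plain average (11) discussed in Remark 3 below, while the default β = 10/m used in the experiments weights recent gradients more heavily.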
Remark 3. We have a few remarks about SGD-BB (Algorithm 2).
(i) SGD-BB takes a convex combination of the m stochastic gradients in one epoch as an estimate of the full gradient, with weighting parameter β. The performance of SGD-BB on different problems is not sensitive to the choice of β. For example, setting β = 10/m worked well for all test problems in our experiments.
(ii) Note that when computing η_k in Algorithm 2, we actually take the absolute value of the BB formula (6). This is because, unlike in SVRG-BB, ĝ_k in Algorithm 2 is not an exact full gradient. As a result, the step size generated by (6) can be negative, as the following argument shows. Consider a simple case in which β = 1/m; then approximately

ĝ_k = (1/m) Σ_{t=0}^{m−1} ∇f_{i_t}(x_t).    (11)

It is easy to see that x̃_k − x̃_{k−1} = −mη_{k−1}ĝ_k. By substituting this equality into the equation for computing η_k in Algorithm 2, we have

η_k = (1/m) · ‖x̃_k − x̃_{k−1}‖₂² / |(x̃_k − x̃_{k−1})ᵀ(ĝ_k − ĝ_{k−1})| = η_{k−1} / |1 − ĝ_kᵀĝ_{k−1}/‖ĝ_k‖₂²|.    (12)

Without taking the absolute value, the denominator in (12) would be ĝ_kᵀĝ_{k−1}/‖ĝ_k‖₂² − 1, which is usually negative in stochastic settings.
(iii) Moreover, from (12) we have the following observations. If ĝ_kᵀĝ_{k−1} < 0, then η_k is smaller than η_{k−1}. This is reasonable, because ĝ_kᵀĝ_{k−1} < 0 indicates that the step size is too large and we need to shrink it. If ĝ_kᵀĝ_{k−1} > 0, then it indicates that we can be more aggressive and take a larger step size. Hence, the way we compute η_k in Algorithm 2 dynamically adjusts the step size by evaluating whether we are moving the iterates along the right direction. This kind of idea can be traced back to [11].
Note that SGD-BB requires the averaged gradients of two epochs to compute the BB step size. Therefore, we need to specify the step sizes η_0 and η_1 for the first two epochs. In our numerical experiments, we found that the performance of SGD-BB is not sensitive to the choices of η_0 and η_1.
4.1 Smoothing Technique for the Step Sizes
Due to the randomness of the stochastic gradients, the step size computed in SGD-BB may sometimes vibrate drastically, and this may cause instability of the algorithm. Inspired by [13], we propose the following smoothing technique to stabilize the step size.

It is known that in order to guarantee the convergence of SGD, the step sizes must be diminishing. As in [13], we assume that the step sizes are of the form C/φ(k), where C > 0 is an unknown constant that needs to be estimated, and φ(k) is a pre-specified function that controls the decreasing rate of the step size; a typical choice is φ(k) = k + 1. In the k-th epoch of Algorithm 2, we have all the previous step sizes η_2, η_3, ..., η_k generated by the BB method, while the step sizes generated by the function C/φ(k) are given by C/φ(2), C/φ(3), ..., C/φ(k). In order to ensure that these two sets of step sizes are close to each other, we solve the following optimization problem to determine the unknown parameter C:

Ĉ_k := argmin_C Σ_{j=2}^k ( log(C/φ(j)) − log η_j )².    (13)

Here we take logarithms of the step sizes to ensure that the estimation is not dominated by those η_j's with large magnitudes. It is easy to verify that the solution to (13) is given by Ĉ_k = Π_{j=2}^k [η_j φ(j)]^{1/(k−1)}. Therefore, the smoothed step size for the k-th epoch of Algorithm 2 is:

η̂_k = Ĉ_k / φ(k) = Π_{j=2}^k [η_j φ(j)]^{1/(k−1)} / φ(k).    (14)

That is, we replace the η_k in equation (∗) of Algorithm 2 by η̂_k in (14). In practice, we do not need to store all the η_j's: Ĉ_k can be computed recursively as Ĉ_k = Ĉ_{k−1}^{(k−2)/(k−1)} · [η_k φ(k)]^{1/(k−1)}.
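The smoothing rule (13)-(14) amounts to a geometric mean, which the recursion above updates in O(1) time per epoch. A small Python sketch of both forms (our own illustration, with φ(k) = k + 1 as the default):

import numpy as np

def update_C(C_prev, eta_k, k, phi=lambda k: k + 1.0):
    """Recursive update C_k = C_{k-1}^((k-2)/(k-1)) * (eta_k * phi(k))^(1/(k-1))."""
    return C_prev ** ((k - 2.0) / (k - 1.0)) * (eta_k * phi(k)) ** (1.0 / (k - 1.0))

def smoothed_step(eta_hist, phi=lambda k: k + 1.0):
    """Smoothed step size (14) for epoch k = len(eta_hist) + 1, given the raw
    BB step sizes eta_2, ..., eta_k in eta_hist."""
    k = len(eta_hist) + 1
    # C_k is the geometric mean of eta_j * phi(j) over j = 2, ..., k:
    # the closed-form solution of the least-squares problem (13).
    log_C = np.mean([np.log(e * phi(j)) for j, e in enumerate(eta_hist, start=2)])
    return float(np.exp(log_C)) / phi(k)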
4.2 Incorporating the BB Step Size into SGD Variants
The BB step size and the smoothing technique used in SGD-BB (Algorithm 2) can also be used in other variants of SGD, because these methods only require gradient estimates, which are accessible in all SGD variants. For example, replacing the stochastic gradient in Algorithm 2 by the averaged gradient of the SAG method yields SAG with BB step size (denoted SAG-BB). Because SAG does not need diminishing step sizes to ensure convergence, in the smoothing technique we simply choose φ(k) ≡ 1. The details of SAG-BB are given in the supplementary material.
5 Numerical Experiments
In this section, we conduct numerical experiments to demonstrate the efficacy of our SVRG-BB (Algorithm 1) and SGD-BB (Algorithm 2) algorithms. In particular, we apply SVRG-BB and SGD-BB to two standard testing problems in machine learning: logistic regression with ℓ₂-norm regularization (LR), and the squared hinge loss SVM with ℓ₂-norm regularization (SVM):

(LR)  min_x F(x) = (1/n) Σ_{i=1}^n log(1 + exp(−b_i a_iᵀx)) + (λ/2)‖x‖₂²,    (15)

(SVM)  min_x F(x) = (1/n) Σ_{i=1}^n [1 − b_i a_iᵀx]₊² + (λ/2)‖x‖₂²,    (16)
where a_i ∈ ℝᵈ and b_i ∈ {±1} are the feature vector and class label of the i-th sample, respectively, and λ > 0 is a weighting parameter.
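For completeness, the component gradients that the stochastic methods above need for (15) and (16) are straightforward; here is one Python rendering. Splitting the regularizer evenly across the n component functions is a common convention and an assumption on our part.

import numpy as np

def grad_lr(x, a_i, b_i, lam):
    """Gradient of one logistic term: log(1 + exp(-b_i a_i^T x)) + (lam/2)||x||^2."""
    z = -b_i * a_i.dot(x)
    return -b_i * a_i / (1.0 + np.exp(-z)) + lam * x

def grad_svm(x, a_i, b_i, lam):
    """Gradient of one squared-hinge term: [1 - b_i a_i^T x]_+^2 + (lam/2)||x||^2."""
    margin = max(0.0, 1.0 - b_i * a_i.dot(x))
    return -2.0 * b_i * margin * a_i + lam * x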
We tested SVRG-BB and SGD-BB on three standard real data sets, downloaded from the LIBSVM website¹. Detailed information on the data sets is given in Table 1.
Table 1: Data and model information of the experiments

Dataset     | n      | d      | model | λ
rcv1.binary | 20,242 | 47,236 | LR    | 10⁻⁵
w8a         | 49,749 | 300    | LR    | 10⁻⁴
ijcnn1      | 49,990 | 22     | SVM   | 10⁻⁴
5.1 Numerical Results of SVRG-BB
Figure 1: Comparison of SVRG-BB and SVRG with fixed step sizes on different problems. Panels (a)-(c) show the sub-optimality on rcv1.binary, w8a, and ijcnn1; panels (d)-(f) show the corresponding step sizes. The dashed lines stand for SVRG with different fixed step sizes η_k given in the legend. The solid lines stand for SVRG-BB with different η_0; for example, the solid lines in sub-figures (a) and (d) correspond to SVRG-BB with η_0 = 10, 1, 0.1, respectively.
In this section, we compare SVRG-BB (Algorithm 1) with SVRG using fixed step sizes for solving (15) and (16). We used the best-tuned step size for SVRG, and three different initial step sizes η_0 for SVRG-BB. For both SVRG-BB and SVRG, we set m = 2n as suggested in [10].

The comparison results of SVRG-BB and SVRG are shown in Figure 1. In all sub-figures, the x-axis denotes the number of epochs k, i.e., the number of outer loops in Algorithm 1. In Figures 1(a), 1(b) and 1(c), the y-axis denotes the sub-optimality F(x̃_k) − F(x*), and in Figures 1(d), 1(e) and 1(f), the y-axis denotes the corresponding step sizes η_k; x* is obtained by running SVRG with the best-tuned step size until it converges. In all sub-figures, the dashed lines correspond to SVRG with the fixed step sizes given in the legends. Moreover, the dashed lines in black always represent SVRG with the best-tuned step size, and the green and red lines use a relatively larger and smaller fixed step size, respectively. The solid lines correspond to SVRG-BB with different initial step sizes η_0.

It can be seen from Figures 1(a), 1(b) and 1(c) that SVRG-BB always achieves the same level of sub-optimality as SVRG with the best-tuned step size. Although SVRG-BB needs slightly more epochs than SVRG with the best-tuned step size, it clearly outperforms SVRG with the other two choices of step size. Moreover, from Figures 1(d), 1(e) and 1(f) we see that the step sizes computed by SVRG-BB converge to the best-tuned step sizes after about 10 to 15 epochs. From Figure 1 we also see that SVRG-BB is not sensitive to the choice of η_0. Therefore, SVRG-BB has very promising potential in practice, because it generates the best step sizes automatically while running the algorithm.

¹ www.csie.ntu.edu.tw/~cjlin/libsvmtools/
5.2 Numerical Results of SGD-BB
Figure 2: Comparison of SGD-BB and SGD. Panels (a)-(c) show the sub-optimality on rcv1.binary, w8a, and ijcnn1; panels (d)-(f) show the corresponding step sizes. The dashed lines correspond to SGD with diminishing step sizes of the form η/(k + 1) with different constants η. The solid lines stand for SGD-BB with different initial step sizes η_0.
In this section, we compare SGD-BB with the smoothing technique (Algorithm 2) against SGD for solving (15) and (16). We set m = n, β = 10/m and η_1 = η_0 in our experiments, and used φ(k) = k + 1 in the smoothing technique. Since SGD requires diminishing step sizes to converge, we tested SGD with diminishing step sizes of the form η/(k + 1) with different constants η. The comparison results are shown in Figure 2. As in Figure 1, the black dashed line represents SGD with the best-tuned η, and the green and red dashed lines correspond to the other two choices of η; the solid lines represent SGD-BB with different η_0.

From Figures 2(a), 2(b) and 2(c) we see that SGD-BB gives comparable or even better sub-optimality than SGD with the best-tuned diminishing step size, and SGD-BB is significantly better than SGD with the other two choices of step size. From Figures 2(d), 2(e) and 2(f) we see that after only a few epochs, the step sizes generated by SGD-BB approximately coincide with the best-tuned ones. It can also be seen that after only a few epochs, the step sizes are stabilized by the smoothing technique, and they approximately follow the same decreasing trend as the best-tuned diminishing step sizes.
5.3 Comparison with Other Methods
We also compared our algorithms with several existing related methods; the experimental results again demonstrate the superiority of our methods. These results are given in the supplementary materials.
Acknowledgements
Research of Shiqian Ma was supported in part by the Hong Kong Research Grants Council General
Research Fund (Grant 14205314). Research of Yu-Hong Dai was supported by the Chinese NSF
(Nos. 11631013 and 11331012) and the National 973 Program of China (No. 2015CB856000).
References

[1] R. Babanezhad, M. O. Ahmed, A. Virani, M. Schmidt, K. Konečný, and S. Sallinen. Stop wasting my gradients: Practical SVRG. In Advances in Neural Information Processing Systems, pages 2242-2250, 2015.
[2] J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141-148, 1988.
[3] Y.-H. Dai. A new analysis on the Barzilai-Borwein gradient method. Journal of the Operations Research Society of China, 1(2):187-198, 2013.
[4] Y.-H. Dai and R. Fletcher. Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming. Numerische Mathematik, 100(1):21-47, 2005.
[5] Y.-H. Dai, W. W. Hager, K. Schittkowski, and H. Zhang. The cyclic Barzilai-Borwein method for unconstrained optimization. IMA Journal of Numerical Analysis, 26(3):604-627, 2006.
[6] Y.-H. Dai and L. Liao. R-linear convergence of the Barzilai and Borwein gradient method. IMA Journal of Numerical Analysis, 22:1-10, 2002.
[7] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646-1654, 2014.
[8] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011.
[9] R. Fletcher. On the Barzilai-Borwein method. In Optimization and Control with Applications, pages 235-256. Springer, 2005.
[10] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[11] H. Kesten. Accelerated stochastic approximation. The Annals of Mathematical Statistics, 29(1):41-59, 1958.
[12] M. Mahsereci and P. Hennig. Probabilistic line searches for stochastic optimization. arXiv preprint arXiv:1502.02846, 2015.
[13] P. Y. Massé and Y. Ollivier. Speed learning on the fly. arXiv preprint arXiv:1511.02540, 2015.
[14] M. Raydan. On the Barzilai and Borwein choice of steplength for the gradient method. IMA Journal of Numerical Analysis, 13(3):321-326, 1993.
[15] M. Raydan. The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM Journal on Optimization, 7(1):26-33, 1997.
[16] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pages 2663-2671, 2012.
[17] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567-599, 2013.
[18] K. Sopyła and P. Drozda. Stochastic gradient descent with Barzilai-Borwein update step for SVM. Information Sciences, 316:218-233, 2015.
[19] Y. Wang and S. Ma. Projected Barzilai-Borwein methods for large scale nonnegative image restorations. Inverse Problems in Science and Engineering, 15(6):559-583, 2007.
[20] Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang. A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation. SIAM Journal on Scientific Computing, 32(4):1832-1857, 2010.
[21] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479-2493, 2009.
Pairwise Choice Markov Chains
Stephen Ragain
Management Science & Engineering
Stanford University
Stanford, CA 94305
[email protected]
Johan Ugander
Management Science & Engineering
Stanford University
Stanford, CA 94305
[email protected]
Abstract
As datasets capturing human choices grow in richness and scale, particularly in online domains, there is an increasing need for choice models that escape traditional choice-theoretic axioms such as regularity, stochastic transitivity, and Luce's choice axiom. In this work we introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, an inferentially tractable model that does not assume any of the above axioms while still satisfying the foundational axiom of uniform expansion, a considerably weaker assumption than Luce's choice axiom. We show that the PCMC model significantly outperforms both the Multinomial Logit (MNL) model and a mixed MNL (MMNL) model in prediction tasks on both synthetic and empirical datasets known to exhibit violations of Luce's axiom. Our analysis also synthesizes several recent observations connecting the Multinomial Logit model and Markov chains; the PCMC model retains the Multinomial Logit model as a special case.
1 Introduction
Discrete choice models describe and predict decisions between distinct alternatives. Traditional applications include consumer purchasing decisions, choices of schooling or employment, and commuter choices among available modes of transportation. Early models of probabilistic discrete choice, including the well known Thurstone Case V model [27] and the Bradley-Terry-Luce (BTL) model [7], were developed and refined under diverse strict assumptions about human decision making. As complex individual choices become increasingly mediated by engineered and learned platforms, from online shopping to web browser clicking to interactions with recommendation systems, there is a pressing need for flexible models capable of describing and predicting nuanced choice behavior.
Luce's choice axiom, popularly known as the independence of irrelevant alternatives (IIA), is arguably the most storied assumption in choice theory [18]. The axiom consists of two statements, applied to each subset of alternatives S within a broader universe U. Let p_{aS} = Pr(a chosen from S) for any S ⊆ U, and in a slight abuse of notation let p_{ab} = Pr(a chosen from {a, b}) when there are only two elements. Luce's axiom is then that: (i) if p_{ab} = 0 then p_{aS} = 0 for all S containing a and b; (ii) the probability of choosing a from U conditioned on the choice lying in S is equal to p_{aS}.
The BTL model, which defines p_{ab} = γ_a/(γ_a + γ_b) for latent 'quality' parameters γ_i > 0, satisfies the axiom, while Thurstone's Case V model does not [1]. Soon after its introduction, the BTL model was generalized from pairwise choices to choices from larger sets [4]. The resulting Multinomial Logit (MNL) model again employs quality parameters γ_i > 0 for each i ∈ U and defines p_{iS}, the probability of choosing i from S ⊆ U, proportional to γ_i for all i ∈ S. Any model that satisfies Luce's choice axiom is equivalent to some MNL model [19].
One consequence of Luce's choice axiom is strict stochastic transitivity between alternatives: if p_{ab} ≥ 0.5 and p_{bc} ≥ 0.5, then p_{ac} ≥ max(p_{ab}, p_{bc}). A possibly undesirable consequence of strict stochastic transitivity is the necessity of a total order across all elements. But note that strict stochastic transitivity does not imply the choice axiom; Thurstone's model exhibits strict stochastic transitivity.
Many choice theorists and empiricists, including Luce, have noted that the choice axiom and stochastic transitivity are strong assumptions that do not hold for empirical choice data [9, 12, 13, 26, 28]. A range of discrete choice models striving to escape the confines of the choice axiom have emerged over the years. The most popular of these models have been Elimination by Aspects [29], mixed MNL (MMNL) [6], and nested MNL [22]. Inference is practically difficult for all three of these models [15, 23]. Additionally, the Elimination by Aspects and MMNL models both exhibit the rigid property of regularity, defined below.
A broad, important class of models in the study of discrete choice is the class of random utility models (RUMs) [4, 20]. A RUM affiliates with each i ∈ U a random variable X_i and defines for each subset S ⊆ U the probability Pr(i chosen from S) = Pr(X_i ≥ X_j, ∀j ∈ S). An independent RUM has independent X_i. RUMs assume neither the choice axiom nor stochastic transitivity. Thurstone's Case V model and the BTL model are both independent RUMs; the Elimination by Aspects and MMNL models are both RUMs. A major result by McFadden and Train establishes that for any RUM there exists an MMNL model that can approximate the choice probabilities of that RUM to within an arbitrary error [23], a strong result about the generality of MMNL models. The nested MNL model, meanwhile, is not a RUM.
Although RUMs need not exhibit stochastic transitivity, they still exhibit the weaker property of regularity: for any choice sets A, B where A ⊆ B, p_{xA} ≥ p_{xB}. Regularity may at first seem intuitively pleasing, but it prevents models from expressing framing effects [12] and other empirical observations from modern behavioral economics [28]. This rigidity motivates us to contribute a new model of discrete choice that escapes historically common assumptions while still furnishing enough structure to be inferentially tractable.
The present work. In this work we introduce a conceptually simple and inferentially tractable model of discrete choice that we call the PCMC model. The parameters of the PCMC model are the off-diagonal entries of a rate matrix Q indexed by U. The PCMC model affiliates each subset S of the alternatives with a continuous time Markov chain (CTMC) on S with transition rate matrix Q_S, whose off-diagonal entries are the entries of Q indexed by pairs of items in S. The model defines p_{iS}, the selection probability of alternative i ∈ S, as the probability mass on alternative i in the stationary distribution of the CTMC on S.

The transition rates of these CTMCs can be interpreted as measures of preference between pairs of alternatives. Special cases of the model use pairwise choice probabilities as transition rates, and as a result the PCMC model extends arbitrary models of pairwise choice to models of set-wise choice. Indeed, we show that when the matrix Q is parameterized with the pairwise selection probabilities of a BTL pairwise choice model, the PCMC model reduces to an MNL model. Recent parameterizations of non-transitive pairwise probabilities, such as the Blade-Chest model [8], can be usefully employed to reduce the number of free parameters of the PCMC model.
Our PCMC model can be thought of as building upon the observation underlying the recently introduced Iterative Luce Spectral Ranking (I-LSR) procedure for efficiently finding the maximum likelihood estimate of the parameters of MNL models [21]. The analysis of I-LSR is precisely an analysis of a PCMC model in the special case where the matrix Q has been parameterized by BTL. In that case the stationary distribution of the chain satisfies the stationarity conditions of the MNL likelihood function, establishing a strong connection between MNL models and Markov chains. The PCMC model generalizes that connection.

Other recent connections between the MNL model and Markov chains include the work on Rank Centrality [24], which employs a discrete time Markov chain for inference in place of I-LSR's continuous time chain, in the special case where all data are pairwise comparisons. Separate recent work has contributed a different discrete time Markov chain model of 'choice substitution' capable of approximating any RUM [3], a related problem but one with a strong focus on ordered preferences. Lastly, recent work by Kumar et al. explores conditions under which a probability distribution over discrete items can be expressed as the stationary distribution of a discrete time Markov chain with 'score' functions similar to the 'quality' parameters in an MNL model [17].
The PCMC model is not a RUM, and in general does not exhibit stochastic transitivity, regularity, or the choice axiom. We find that the PCMC model does, however, obey the lesser known but fundamental axiom of uniform expansion, a weakened version of Luce's choice axiom proposed by Yellott that implies the choice axiom for independent RUMs [30]. In this work we define a convenient structural property termed contractibility, of which uniform expansion is a special case, and we show that the PCMC model exhibits contractibility. Of the models mentioned above, only Elimination by Aspects exhibits uniform expansion without being an independent RUM. Elimination by Aspects obeys regularity, which the PCMC model does not; as such, the PCMC model is uniquely positioned in the literature of axiomatic discrete choice, minimally satisfying uniform expansion without the other aforementioned axioms.
After presenting the model and its properties, we investigate choice predictions from our model on two empirical choice datasets as well as diverse synthetic datasets. The empirical choice datasets concern transportation choices made on commuting and shopping trips in San Francisco. Inference on synthetic data shows that PCMC is competitive with MNL when Luce's choice axiom holds, while PCMC outperforms MNL when the axiom does not hold. More significantly, for both of the empirical datasets we find that a learned PCMC model predicts empirical choices significantly better than a learned MNL model.
2 The PCMC model
A Pairwise Choice Markov Chain (PCMC) model defines the selection probability p_{iS}, the probability of choosing i from S ⊆ U, as the probability mass on alternative i ∈ S of the stationary distribution of a continuous time Markov chain (CTMC) on the set of alternatives S. The model's parameters are the off-diagonal entries q_{ij} of a rate matrix Q indexed by pairs of elements in U. See Figure 1 for a diagram. We impose the constraint q_{ij} + q_{ji} ≥ 1 for all pairs (i, j), which ensures irreducibility of the chain for all S.

Figure 1: Markov chains on choice sets {a, b}, {a, c}, and {b, c}, where line thicknesses denote transition rates. The chain on the choice set {a, b, c} is assembled using the same rates.
Given a query set S ⊆ U, we construct Q_S by restricting the rows and columns of Q to elements in S and setting q_{ii} = −Σ_{j∈S\i} q_{ij} for each i ∈ S. Let π_S = {π_S(i)}_{i∈S} be the stationary distribution of the corresponding CTMC on S, and let π_S(A) = Σ_{x∈A} π_S(x). We define the choice probability p_{iS} := π_S(i), and now show that the PCMC model is well defined.
Proposition 1. The choice probabilities p_{iS} are well defined for all i ∈ S and all S ⊆ U of a finite U.

Proof. We need only show that there is a single closed communicating class. Because S is finite, there must be at least one closed communicating class. Suppose the chain had more than one closed communicating class, and that i ∈ S and j ∈ S were in different closed communicating classes. But q_{ij} + q_{ji} ≥ 1, so at least one of q_{ij} and q_{ji} is strictly positive, and the chain can switch communicating classes through the transition with strictly positive rate, a contradiction.

While the support of π_S is the single closed communicating class, S may have transient states corresponding to alternatives with selection probability 0. Note that the irreducibility argument needs only that q_{ij} + q_{ji} be positive, not necessarily at least 1 as imposed in the model definition. One could simply constrain q_{ij} + q_{ji} ≥ ε for some positive ε. However, multiplying all entries of Q by some c > 0 does not affect the stationary distribution of the corresponding CTMC, so multiplication by 1/ε gives a Q with the same selection probabilities.
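Computationally, obtaining p_{iS} only requires solving a small linear system for the stationary distribution of Q_S. The following Python sketch is our own illustration of this step, not the authors' released code:

import numpy as np

def pcmc_probs(Q, S):
    """Selection probabilities p_iS: stationary distribution of the CTMC on S.
    Q is a |U| x |U| array of pairwise rates (its diagonal is ignored);
    S is a list of indices into U."""
    QS = Q[np.ix_(S, S)].astype(float)
    np.fill_diagonal(QS, 0.0)
    np.fill_diagonal(QS, -QS.sum(axis=1))      # q_ii = -sum_{j != i} q_ij
    # Solve pi^T Q_S = 0 together with sum_i pi(i) = 1.
    A = np.vstack([QS.T, np.ones(len(S))])
    rhs = np.append(np.zeros(len(S)), 1.0)
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pi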
In the subsections that follow, we develop key properties of the model. We begin by showing that assigning Q according to a Bradley-Terry-Luce (BTL) pairwise model results in the PCMC model being equivalent to BTL's canonical extension, the Multinomial Logit (MNL) set-wise model. We then construct a Q for which the PCMC model is neither regular nor a RUM.
2.1 Multinomial Logit from Bradley-Terry-Luce
We now observe that the Multinomial Logit (MNL) model, also called the Plackett-Luce model, is precisely a PCMC model with a matrix Q consisting of pairwise BTL probabilities. Recall that the BTL model assumes the existence of latent 'quality' parameters γ_i > 0 for i ∈ U with p_{ij} = γ_i/(γ_i + γ_j) for all i, j ∈ U, and that the MNL generalization defines p_{iS} ∝ γ_i for all i ∈ S, for each S ⊆ U.
Proposition 2. Let γ be the parameters of a BTL model on U. For q_{ji} = γ_i/(γ_i + γ_j), the PCMC probabilities p_{iS} are consistent with an MNL model on S with parameters γ.

Proof. We aim to show that π_S = γ/‖γ‖₁ is a stationary distribution of the PCMC chain, i.e., π_SᵀQ_S = 0. We have:

(π_SᵀQ_S)_i = (1/‖γ‖₁) [ Σ_{j≠i} γ_j q_{ji} − γ_i Σ_{j≠i} q_{ij} ] = (1/‖γ‖₁) [ Σ_{j≠i} γ_iγ_j/(γ_i + γ_j) − Σ_{j≠i} γ_iγ_j/(γ_i + γ_j) ] = 0,  ∀i.

Thus π_S is always the stationary distribution of the chain, and we know by Proposition 1 that it is unique. It follows that p_{iS} ∝ γ_i for all i ∈ S, as desired.
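Proposition 2 is easy to verify numerically with the pcmc_probs sketch above; the quality parameters below are arbitrary illustrative values:

import numpy as np

gamma = np.array([3.0, 2.0, 1.0, 0.5])      # arbitrary BTL quality parameters
idx = range(len(gamma))
# q_ji = gamma_i / (gamma_i + gamma_j): entry [j][i] of the rate matrix.
Q = np.array([[gamma[i] / (gamma[i] + gamma[j]) if i != j else 0.0
               for i in idx] for j in idx])
for S in ([0, 1, 2], [0, 2, 3], [0, 1, 2, 3]):
    pi = pcmc_probs(Q, S)
    print(np.allclose(pi, gamma[S] / gamma[S].sum()))   # True: MNL probabilities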
Other parameterizations of Q, which can be used for parameter reduction or to extend arbitrary models of pairwise choice, are explored in Section 1 of the Supplementary Material.
2.2 A counterexample to regularity
The regularity property stipulates that for any S′ ⊆ S, the probability of selecting a from S′ is at least the probability of selecting a from S. All RUMs exhibit regularity because S′ ⊆ S implies Pr(X_i = max_{j∈S′} X_j) ≥ Pr(X_i = max_{j∈S} X_j). We now construct a simple PCMC model which does not exhibit regularity, and is thus not a RUM.

Consider U = {r, p, s} corresponding to a rock-paper-scissors-like stochastic game where each pairwise matchup has the same win probability ρ > 1/2. Constructing a PCMC model where the transition rate from i to j is ρ if j beats i in rock-paper-scissors yields the rate matrix

Q = [ −1    1−ρ   ρ
      ρ     −1    1−ρ
      1−ρ   ρ     −1 ].

We see that for pairs of objects, the PCMC model returns the same probabilities as the pairwise game, i.e. p_{ij} = ρ when i beats j in rock-paper-scissors, since p_{ij} = q_{ji} when q_{ij} + q_{ji} = 1. Regardless of how the probability ρ is chosen, however, it is always the case that p_{rU} = p_{pU} = p_{sU} = 1/3. It follows that regularity does not hold for ρ > 2/3.
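This regularity violation can also be checked numerically with the pcmc_probs sketch from earlier in this section; the value of ρ below is arbitrary:

import numpy as np

rho = 0.9                                    # any rho > 2/3 violates regularity
Q = np.array([[0.0, 1 - rho, rho],
              [rho, 0.0, 1 - rho],
              [1 - rho, rho, 0.0]])          # diagonal is recomputed internally
print(pcmc_probs(Q, [0, 1, 2]))              # [1/3, 1/3, 1/3] for every rho
print(pcmc_probs(Q, [0, 1]))                 # pairwise: [rho, 1 - rho]
# The first alternative has mass 0.9 in the pair but only 1/3 in the triple.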
We view the PCMC model's lack of regularity as a positive trait, in the sense that empirical choice phenomena such as framing effects and asymmetric dominance violate regularity [12], and the PCMC model is rare in its ability to model such choices. Deriving necessary and sufficient conditions on Q for a PCMC model to be a RUM, analogous to the known characterization theorems for RUMs [10] and the known sufficient conditions for nested MNL models to be RUMs [5], is an interesting open challenge.
3 Properties
While we have already demonstrated that the PCMC model avoids several restrictive properties that are often inconsistent with empirical choice data, we show in this section that the PCMC model still exhibits deep structure in the form of contractibility, which implies uniform expansion. Inspired by a thought experiment that was posed as an early challenge to the choice axiom, we define the property of contractibility to handle notions of similarity between elements, and we demonstrate that the PCMC model exhibits contractibility, which gracefully handles this thought experiment.
3.1 Uniform expansion
Yellott [30] introduced uniform expansion as a condition weaker than Luce's choice axiom, but one that implies the choice axiom in the context of any independent RUM. Yellott posed the axiom of invariance under uniform expansion in the context of 'copies' of elements which are 'identical.' In the context of our model, such copies would have identical transition rates to the other alternatives:

Definition 1 (Copies). For i, j in S ⊆ U, we say that i and j are copies if q_{ik} = q_{jk} for all k ∈ S \ {i, j}, and q_{ij} = q_{ji}.
Yellott's introduction to uniform expansion asks the reader to consider an offer of a choice of beverage from k identical cups of coffee, k identical cups of tea, and k identical glasses of milk. Yellott contends that the probability that the reader chooses a given type of beverage (e.g. coffee) in this scenario should be the same as if they were shown only one cup of each beverage type, regardless of k ≥ 1.

Definition 2 (Uniform Expansion). Consider a choice between n elements in a set S_1 = {i_{11}, ..., i_{n1}}, and another choice from a set S_k containing k copies of each of the n elements: S_k = {i_{11}, ..., i_{1k}, i_{21}, ..., i_{2k}, ..., i_{n1}, ..., i_{nk}}. The axiom of uniform expansion states that for each m = 1, ..., n and all k ≥ 1:

p_{i_{m1}S_1} = Σ_{j=1}^k p_{i_{mj}S_k}.
We will show that the PCMC model always exhibits a more general property, contractibility, of which uniform expansion is a special case; it thus always exhibits uniform expansion.

Yellott showed that for any independent RUM with |U| ≥ 3, the double-exponential distribution family is the only family of independent distributions that exhibits uniform expansion for all k ≥ 1, and that Thurstone's model, based on the Gaussian distribution family, in particular does not exhibit uniform expansion.

While uniform expansion seems natural in many discrete choice contexts, it should be regarded with some skepticism in applications that model competitions. Sports matches or races are often modeled using RUMs, where the winner of a competition can be modeled as the competitor with the best draw from their random variable. If a competitor has a performance distribution with a heavy upper tail (so that their wins come from occasional 'good days'), uniform expansion would not hold. This observation relates to recent work on team performance and selection [14], where non-invariance under uniform expansion plays a key role.
3.2 Contractibility
In a book review of Luce's early work on the choice axiom, Debreu [9] considers a hypothetical choice between three musical recordings: one of Beethoven's eighth symphony conducted by X, another of Beethoven's eighth symphony conducted by Y, and one of a Debussy quartet conducted by Z. We will call these options B_1, B_2, and D respectively. When compared to D, Debreu argues that B_1 and B_2 are indistinguishable in the sense that p_{DB_1} = p_{DB_2}. However, someone may prefer B_1 over B_2 in the sense that p_{B_1B_2} > 0.5. This is impossible under a BTL model, in which p_{DB_1} = p_{DB_2} implies that γ_{B_1} = γ_{B_2} and in turn p_{B_1B_2} = 0.5.

To address contexts in which elements compare identically to alternatives but not to each other (e.g. B_1 and B_2), we introduce contractible partitions that group such similar alternatives into sets. We then show that when a PCMC model contains a contractible partition, the relative probability of selecting from each part of the partition is independent of how comparisons are made between alternatives within the same set. Our contractible partition definition can be viewed as akin to (but distinct from) nests in nested MNL models [22].
Definition 3 (Contractible Partition). A partition of U into non-empty sets A_1, ..., A_k is a contractible partition if q_{a_ia_j} = λ_{ij} for all a_i ∈ A_i and a_j ∈ A_j, for some λ = {λ_{ij}}, i, j ∈ {1, ..., k}.

Proposition 3. For a given λ, let A_1, ..., A_k be a contractible partition for two PCMC models on U represented by Q and Q′, with stationary distributions π and π′. Then for any A_i:

Σ_{j∈A_i} p_{jU} = Σ_{j∈A_i} p′_{jU},    (1)

or equivalently, π(A_i) = π′(A_i).
Proof. Suppose Q has contractible partition A_1, ..., A_k with respect to λ. If we decompose the balance equations (i.e. each row of πᵀQ = 0), for x ∈ A_1 (WLOG) we obtain:

π(x) ( Σ_{y∈A_1\x} q_{xy} + Σ_{i=2}^k Σ_{a_i∈A_i} q_{xa_i} ) = Σ_{y∈A_1\x} π(y) q_{yx} + Σ_{i=2}^k Σ_{a_i∈A_i} π(a_i) q_{a_ix}.    (2)

Noting that q_{a_ia_j} = λ_{ij} for a_i ∈ A_i and a_j ∈ A_j, (2) can be rewritten:

π(x) ( Σ_{y∈A_1\x} q_{xy} ) + π(x) Σ_{i=2}^k |A_i| λ_{1i} = Σ_{y∈A_1\x} π(y) q_{yx} + Σ_{i=2}^k π(A_i) λ_{i1}.

Summing over x ∈ A_1 then gives

Σ_{x∈A_1} π(x) ( Σ_{y∈A_1\x} q_{xy} ) + π(A_1) Σ_{i=2}^k |A_i| λ_{1i} = Σ_{x∈A_1} Σ_{y∈A_1\x} π(y) q_{yx} + |A_1| Σ_{i=2}^k π(A_i) λ_{i1}.

The leftmost terms of the two sides are equal, so we have

π(A_1) = |A_1| Σ_{i=2}^k π(A_i) λ_{i1} / Σ_{i=2}^k |A_i| λ_{1i},    (3)

which makes π(A_1) the solution to the global balance equations of a different continuous time Markov chain with states {A_1, ..., A_k}, transition rate q̃_{A_iA_j} = |A_j| λ_{ij} between states A_i and A_j, and q̃_{A_iA_i} = −Σ_{j≠i} q̃_{A_iA_j}. Now q_{a_ia_j} + q_{a_ja_i} ≥ 1 implies λ_{ij} + λ_{ji} ≥ 1. Combining this observation with |A_i| > 0 shows (as in the proof of Proposition 1) that this chain is irreducible, and thus that {π(A_i)}_{i=1}^k is well-defined. Furthermore, because Q̃ is determined entirely by λ and |A_1|, ..., |A_k|, we have that Q̃ = Q̃′, and thus that π(A_i) = π′(A_i) for all i regardless of how Q and Q′ may differ, completing the proof.
The intuition is that we can 'contract' each A_i to a single 'type,' because the probability of choosing an element of A_i is independent of the pairwise probabilities between elements within the sets. The above proposition, together with the contractibility of a PCMC model on all uniformly expanded sets, implies that all PCMC models exhibit uniform expansion.
Proposition 4. Any PCMC model exhibits uniform expansion.

Proof. We translate the problem of uniform expansion into the language of contractibility. Let U_1 be the universe of unique items i_{11}, i_{21}, ..., i_{n1}, and let U_k be a universe containing k copies of each item in U_1. Let i_{mj} denote the j-th copy of the m-th item in U_1, so that U_k = ∪_{m=1}^n ∪_{j=1}^k {i_{mj}}. Let Q be the transition rate matrix of the CTMC on U_1. We construct a contractible partition of U_k into n sets, each containing the k copies of one item in U_1; thus A_m = ∪_{j=1}^k {i_{mj}}. By the definition of copies, {A_m}_{m=1}^n is a contractible partition of U_k with λ = Q. Noting that |A_m| = k for all m in Equation (3) above results in {π(A_m)}_{m=1}^n being the solution to πᵀQ = πᵀλ = 0. Thus p_{i_{m1}U_1} = π(A_m) = Σ_{j=1}^k p_{i_{mj}U_k} for each m, showing that the model exhibits uniform expansion.
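Proposition 4 can likewise be illustrated numerically by duplicating each alternative: a Kronecker product builds a universe of k exact copies satisfying Definition 1, and the per-type probability mass is unchanged. The construction below is our own illustration, again reusing pcmc_probs:

import numpy as np

rng = np.random.default_rng(1)
n, k = 3, 4
Q1 = rng.uniform(0.5, 1.0, (n, n))            # arbitrary rates with q_ij + q_ji >= 1
Qk = np.kron(Q1, np.ones((k, k)))             # indices m*k..(m+1)*k-1 are copies of m
pi1 = pcmc_probs(Q1, list(range(n)))
pik = pcmc_probs(Qk, list(range(n * k)))
print(np.allclose(pi1, pik.reshape(n, k).sum(axis=1)))   # True: uniform expansion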
We end this section by noting that every PCMC model has a trivial contractible partition into singletons. Detection and exploitation of non-trivial contractible partitions of Q (or appropriately defined 'nearly contractible partitions') are interesting open research directions.
4 Inference and prediction
Our ultimate goal in formulating this model is to make predictions: using past choices from diverse subsets S ⊆ U to predict future choices. In this section we first give the log-likelihood function log L(Q; C) of the rate matrix Q given a choice data collection of the form C = {(i_k, S_k)}_{k=1}^n, where i_k ∈ S_k is the item chosen from S_k. We then investigate the ability of a learned PCMC model to make choice predictions on empirical data, benchmarked against learned MNL and MMNL models, and interpret the inferred model parameters Q̂. Let C_{iS}(C) = |{(i_k, S_k) ∈ C : i_k = i, S_k = S}| denote the number of times in the data that i was chosen out of set S, and let C_S(C) = |{(i_k, S_k) ∈ C : S_k = S}| be the number of times that S was the choice set, for each S ⊆ U.
4.1 Maximum likelihood
For each S ⊆ U and i ∈ S, recall that p_{iS}(Q) is the probability that i is selected from set S, as a function of the rate matrix Q. After dropping all additive constants, the log-likelihood of Q given the data C (derived from the probability mass function of the multinomial distribution) is:

log L(Q; C) = Σ_{S⊆U} Σ_{i∈S} C_{iS}(C) log(p_{iS}(Q)).

Recall that for the PCMC model, p_{iS}(Q) = π_S(i), where π_S is the stationary distribution of a CTMC with rate matrix Q_S, i.e. π_SᵀQ_S = 0 and Σ_{i∈S} π_S(i) = 1. There is no general closed form expression for p_{iS}(Q). This implicit definition also makes it difficult to derive gradients of log L with respect to the parameters q_{ij}. We employ SLSQP [25] to maximize log L(Q; C), which is non-concave in general. For more information on the optimization techniques used in this section, see the Supplementary Materials.
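As an illustration of the inference step, the sketch below evaluates the negative log-likelihood by reusing pcmc_probs from Section 2 and hands it to SciPy's SLSQP routine. The flat parameterization of the off-diagonal of Q and the way we encode the constraints q_{ij} + q_{ji} ≥ 1 are our own choices for this example, not necessarily those used for the experiments in this paper.

import numpy as np
from scipy.optimize import minimize

def offdiag_index(i, j, n):
    """Position of q_ij in the row-major flattening of Q's off-diagonal."""
    return i * (n - 1) + (j if j < i else j - 1)

def neg_log_like(q_flat, n_items, data):
    """data is a list of (chosen item i, choice set S) pairs."""
    Q = np.zeros((n_items, n_items))
    Q[~np.eye(n_items, dtype=bool)] = q_flat
    ll = 0.0
    for i, S in data:
        pi = pcmc_probs(Q, S)                     # stationary distribution on S
        ll += np.log(max(pi[S.index(i)], 1e-12))  # guard against log(0)
    return -ll

def pair_constraints(n):
    """Inequality constraints q_ij + q_ji - 1 >= 0 for every pair i < j."""
    cons = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = offdiag_index(i, j, n), offdiag_index(j, i, n)
            cons.append({"type": "ineq",
                         "fun": lambda q, a=a, b=b: q[a] + q[b] - 1.0})
    return cons

# Toy data over a 3-item universe.
data = [(0, [0, 1, 2]), (1, [0, 1]), (0, [0, 2]), (2, [1, 2])]
n_items = 3
q0 = np.full(n_items * (n_items - 1), 0.75)
res = minimize(neg_log_like, q0, args=(n_items, data), method="SLSQP",
               bounds=[(0.0, None)] * len(q0),
               constraints=pair_constraints(n_items))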
4.2 Empirical data results
We evaluate our inference procedure on two empirical choice datasets, SFwork and SFshop, collected from a survey of transportation preferences around the San Francisco Bay Area [16]. The SFshop dataset contains 3,157 observations, each consisting of a choice set of transportation alternatives available to an individual traveling to and returning from a shopping center, together with the choice made from that set. The SFwork dataset contains 5,029 observations consisting of commuting options and the choice made on a given commute. Basic statistics describing the choice set sizes and the number of times each pair of alternatives appears in the same choice set are given in the Supplementary Materials¹.
We train our model on observations T_train ⊆ C and evaluate on a test set T_test ⊆ C via

Error(T_train; T_test) = (1/|T_test|) Σ_{(i,S)∈T_test} Σ_{j∈S} | p_{jS}(Q̂(T_train)) − p̂_{jS}(T_test) |,    (4)

where Q̂(T_train) is the estimate for Q obtained from the observations in T_train, and p̂_{iS}(T_test) = C_{iS}(T_test)/C_S(T_test) is the empirical probability that i is selected from S among the observations in T_test. Note that Error(T_train; T_test) is the expected ℓ₁-norm of the difference between the empirical distribution and the inferred distribution on a choice set drawn uniformly at random from the observations in T_test. We applied small amounts of additive smoothing to each dataset.
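For concreteness, the evaluation metric (4) can be computed as follows; the data layout (choice sets as tuples, a model_probs callback returning inferred distributions) is an assumption of this sketch:

from collections import Counter

def error(test_data, model_probs):
    """Expected l1 error (4). test_data: list of (i, S) with S a tuple of items;
    model_probs(S) returns the inferred distribution over S as a dict."""
    c_iS = Counter(test_data)                        # counts C_iS(T_test)
    c_S = Counter(S for _, S in test_data)           # counts C_S(T_test)
    total = 0.0
    for i, S in test_data:
        emp = {j: c_iS[(j, S)] / c_S[S] for j in S}  # empirical p-hat_jS
        mod = model_probs(S)                         # inferred p_jS from Q-hat
        total += sum(abs(mod[j] - emp[j]) for j in S)
    return total / len(test_data)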
We compare our PCMC model against both an MNL model trained using Iterative Luce Spectral Ranking (I-LSR) [21] and a more flexible MMNL model. We used a discrete mixture of k MNL models (with O(kn) parameters), choosing k so that the MMNL model had strictly more parameters than the PCMC model on each data set. For details on how the MMNL model was trained, see the Supplementary Materials.
Figure 2 shows Error(T_train; T_test) on the SFwork data as the learning procedure is applied to increasing amounts of data. The results are averaged over 1,000 different permutations of the data, with a 75/25 train/test split employed for each permutation. We show the error on the testing data as we train with increasing proportions of the training data. A similar figure for the SFshop data appears in the Supplementary Materials.
We see that our model is better equipped to learn from and make predictions on both datasets: when using all of the training data, we observe an error reduction of 36.2% and 46.5% compared to MNL, and 24.4% and 31.7% compared to MMNL, on SFwork and SFshop respectively.
Figure 2 also gives two different heat maps of Q̂ for the SFwork data, showing the relative rates q̂_{ij}/q̂_{ji} between pairs of items, as well as how the total rate q̂_{ij} + q̂_{ji} of each pair compares to the total rates of other pairs. The index ordering of each matrix follows the estimated selection probabilities of the PCMC model on the full set of alternatives for that dataset. The ordered options for SFwork are: (1) driving alone, (2) sharing a ride with one other person, (3) walking, (4) public transit, (5) biking, and (6) carpooling with at least two others. Numerical values for the entries of Q̂ for both datasets appear in the Supplementary Materials.

¹ Data and code available here: https://github.com/sragain/pcmc-nips

Figure 2: Prediction error on a 25% holdout of the SFwork data for the PCMC, MNL, and MMNL models. PCMC sees improvements of 35.9% and 24.5% in prediction error over MNL and MMNL, respectively, when training on 75% of the data.
The inferred pairwise selection probabilities are p̂_{ij} = q̂_{ji}/(q̂_{ji} + q̂_{ij}). Constructing a tournament graph on the alternatives with (i, j) ∈ E if p̂_{ij} ≥ 0.5, cyclic triplets are then length-3 cycles in the tournament. A bound due to Harary and Moser [11] establishes that the maximum number of cyclic triplets in a tournament graph on n nodes is 8 when n = 6 and 20 when n = 8. According to our learned model, the choices exhibit 2 out of a maximum 8 cyclic triplets in the SFwork data and 6 out of a maximum 20 cyclic triplets in the SFshop data.
Additional evaluations of predictive performance across a range of synthetic datasets appear in the Supplementary Materials. The majority of datasets in the discrete choice literature focus on pairwise comparisons or ranked lists, where lists inherently assume transitivity and the independence of irrelevant alternatives. The SFwork and SFshop datasets are rare examples of public datasets that genuinely study choices from sets larger than pairs.
5 Conclusion
We introduce the Pairwise Choice Markov Chain (PCMC) model of discrete choice, which defines selection probabilities according to the stationary distributions of continuous time Markov chains on sets of alternatives. The model parameters are the transition rates between pairs of alternatives.

In general the PCMC model is not a random utility model (RUM), and it maintains broad flexibility by eschewing the implications of Luce's choice axiom, stochastic transitivity, and regularity. Despite this flexibility, we demonstrate that the PCMC model exhibits desirable structure by fulfilling uniform expansion, a property previously found only in the Multinomial Logit (MNL) model and the intractable Elimination by Aspects model.

We also introduce the notion of contractibility, a property motivated by thought experiments instrumental in moving choice theory beyond the choice axiom, for which Yellott's axiom of uniform expansion is a special case. Our work demonstrates that the PCMC model exhibits contractibility, which implies uniform expansion. We also showed that the PCMC model offers straightforward inference through maximum likelihood estimation, and that a learned PCMC model predicts empirical choice data with significantly higher fidelity than both MNL and MMNL models.

The flexibility and tractability of the PCMC model open up many compelling research directions. First, what necessary and sufficient conditions on the matrix Q guarantee that a PCMC model is a RUM [10]? The efficacy of the PCMC model suggests exploring other effective parameterizations for Q, including developing inferential methods that exploit contractibility. There are also open computational questions, such as streamlining the likelihood maximization using gradients of the implicit function definitions. Very recently, learning results for nested MNL models have shown favorable query complexity under an oracle model [2], and a comparison of our PCMC model with these approaches to learning nested MNL models is important future work.

Acknowledgements. This work was supported in part by a David Morgenthaler II Faculty Fellowship and a Dantzig-Lieberman Operations Research Fellowship.
References

[1] E. Adams and S. Messick. An axiomatic formulation and generalization of successive intervals scaling. Psychometrika, 23(4):355-368, 1958.
[2] A. R. Benson, R. Kumar, and A. Tomkins. On the relevance of irrelevant alternatives. In WWW, 2016.
[3] J. Blanchet, G. Gallego, and V. Goyal. A Markov chain approximation to choice modeling. In EC, 2013.
[4] H. D. Block and J. Marschak. Random orderings and stochastic theories of responses. Contributions to Probability and Statistics, 2:97-132, 1960.
[5] A. Börsch-Supan. On the compatibility of nested logit models with utility maximization. Journal of Econometrics, 43(3):373-388, 1990.
[6] J. H. Boyd and R. E. Mellman. The effect of fuel economy standards on the US automotive market: an hedonic demand analysis. Transportation Research Part A: General, 14(5-6):367-378, 1980.
[7] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: the method of paired comparisons. Biometrika, 39(3-4):324-345, 1952.
[8] S. Chen and T. Joachims. Modeling intransitivity in matchup and comparison data. In WSDM, 2016.
[9] G. Debreu. Review of individual choice behavior: A theoretical analysis. American Economic Review, 1960.
[10] J.-C. Falmagne. A representation theorem for finite random scale systems. J. Math. Psych., 18(1):52-72, 1978.
[11] F. Harary and L. Moser. The theory of round robin tournaments. The American Mathematical Monthly, 73(3):231-246, 1966.
[12] J. Huber, J. W. Payne, and C. Puto. Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of Consumer Research, pages 90-98, 1982.
[13] S. Ieong, N. Mishra, and O. Sheffet. Predicting preference flips in commerce search. In ICML, 2012.
[14] J. Kleinberg and M. Raghu. Team performance with test scores. In EC, pages 511-528, 2015.
[15] R. Kohli and K. Jedidi. Error theory for elimination by aspects. Operations Research, 63(3):512-526, 2015.
[16] F. S. Koppelman and C. Bhat. A self instructing course in mode choice modeling: multinomial and nested logit models. US Department of Transportation, Federal Transit Administration, 31, 2006.
[17] R. Kumar, A. Tomkins, S. Vassilvitskii, and E. Vee. Inverting a steady-state. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 359-368. ACM, 2015.
[18] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[19] R. D. Luce. The choice axiom after twenty years. J. Math. Psych., 15(3):215-233, 1977.
[20] C. F. Manski. The structure of random utility models. Theory and Decision, 8(3):229-254, 1977.
[21] L. Maystre and M. Grossglauser. Fast and accurate inference of Plackett-Luce models. In NIPS, 2015.
[22] D. McFadden. Econometric models for probabilistic choice among products. Journal of Business, pages S13-S29, 1980.
[23] D. McFadden, K. Train, et al. Mixed MNL models for discrete response. Journal of Applied Econometrics, 15(5):447-470, 2000.
[24] S. Negahban, S. Oh, and D. Shah. Rank centrality: Ranking from pair-wise comparisons. arXiv preprint arXiv:1209.1688v4, 2015.
[25] J. Nocedal and S. J. Wright. Numerical Optimization. 2006.
[26] I. Simonson and A. Tversky. Choice in context: Tradeoff contrast and extremeness aversion. Journal of Marketing Research, 29(3):281, 1992.
[27] L. L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273, 1927.
[28] J. S. Trueblood, S. D. Brown, A. Heathcote, and J. R. Busemeyer. Not just for consumers: context effects are fundamental to decision making. Psychological Science, 24(6):901-908, 2013.
[29] A. Tversky. Elimination by aspects: A theory of choice. Psychological Review, 79(4):281, 1972.
[30] J. I. Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. J. Math. Psych., 15(2):109-144, 1977.
5,844 | 6,288 | Split LBI: An Iterative Regularization Path with Structural Sparsity
Chendi Huang¹, Xinwei Sun¹, Jiechao Xiong¹, Yuan Yao²,¹
¹Peking University, ²Hong Kong University of Science and Technology
{cdhuang, sxwxiaoxiaohehe, xiongjiechao}@pku.edu.cn, [email protected]
Abstract
An iterative regularization path with structural sparsity is proposed in this paper, based on variable splitting and the Linearized Bregman Iteration, hence called Split LBI. Despite its simplicity, Split LBI outperforms the popular generalized Lasso in both theory and experiments. A theory of path consistency is presented: equipped with a proper early stopping rule, Split LBI may achieve model selection consistency under a family of Irrepresentable Conditions which can be weaker than the necessary and sufficient condition for generalized Lasso. Furthermore, some ℓ2 error bounds are also given at the minimax optimal rates. The utility and benefit of the algorithm are illustrated by applications to both traditional image denoising and a novel example of partial order ranking.
1 Introduction
In this paper, consider the recovery from linear noisy measurements of β* ∈ R^p, which satisfies the following structural sparsity: the linear transformation γ* := Dβ* for some D ∈ R^{m×p} has most of its elements being zero. For a design matrix X ∈ R^{n×p}, let

    y = Xβ* + ε,  γ* = Dβ*  (S = supp(γ*), s = |S|),    (1.1)

where ε ∈ R^n has independent identically distributed components, each of which has a sub-Gaussian distribution with parameter σ² (E[exp(tε_i)] ≤ exp(σ²t²/2)). Here γ* is sparse, i.e. s ≪ m. Given (y, X, D), the purpose is to estimate β* as well as γ*, and in particular, to recover the support of γ*.
There is a large literature on this problem. Perhaps the most popular approach is the following ℓ1-penalized convex optimization problem,

    arg min_β  (1/(2n)) ‖y − Xβ‖₂² + λ‖Dβ‖₁.    (1.2)
Such a problem can be traced back at least to [ROF92], as a total variation regularization for image denoising in applied mathematics; in statistics it was formally proposed by [Tib+05] as the fused Lasso. When D = I it reduces to the well-known Lasso [Tib96], and since different choices of D include many special cases, it is often called the generalized Lasso [TT11] in statistics.
Various algorithms have been studied for solving (1.2) at fixed values of the tuning parameter λ, most of which are based on Split Bregman or ADMM using operator splitting ideas (see for example [GO09; YX11; Wah+12; RT14; Zhu15] and references therein). To avoid the difficulty of dealing with the structural sparsity in ‖Dβ‖₁, these algorithms exploit an augmented variable γ to enforce sparsity while keeping it close to Dβ.
On the other hand, regularization paths are crucial for model selection, by computing estimators as functions of regularization parameters. For example, [Efr+04] studies the regularization path of standard Lasso with D = I, the algorithm in [Hoe10] computes the regularization path of the fused Lasso, and the dual path algorithm in [TT11] can deal with the generalized Lasso. Recently, [AT16] discussed various efficient implementations of the algorithm in [TT11], and the related R package genlasso can be found in the CRAN repository. All of these are based on homotopy methods for solving the convex optimization (1.2).
Our departure here, instead of solving (1.2), is to look at an extremely simple yet novel iterative scheme which finds a new regularization path with structural sparsity. We are going to show that it works in a better way than genlasso, in both theory and experiments. To see this, define a loss function which splits Dβ and γ,

    ℓ(β, γ) := (1/(2n)) ‖y − Xβ‖₂² + (1/(2ν)) ‖γ − Dβ‖₂²  (ν > 0).    (1.3)

Now consider the following iterative algorithm,

    β_{k+1} = β_k − κα ∇_β ℓ(β_k, γ_k),    (1.4a)
    z_{k+1} = z_k − α ∇_γ ℓ(β_k, γ_k),    (1.4b)
    γ_{k+1} = κ · prox_{‖·‖₁}(z_{k+1}),    (1.4c)

where the initial choice is z_0 = γ_0 = 0 ∈ R^m, β_0 = 0 ∈ R^p, the parameters satisfy κ > 0, α > 0, ν > 0, and the proximal map associated with a convex function h is defined by prox_h(z) = arg min_x ‖z − x‖²/2 + h(x), which reduces to the shrinkage operator when h is taken to be the ℓ1-norm, prox_{‖·‖₁}(z) = S(z, 1), where

    S(z, λ) = sign(z) ⊙ max(|z| − λ, 0)  (λ ≥ 0).
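To fix ideas, the following NumPy sketch implements iteration (1.4) literally. The function name, the fixed number of steps, and the default step size (chosen in the spirit of Section 3.1) are our own illustrative choices, not part of the algorithm's specification.

```python
import numpy as np

def split_lbi(X, y, D, nu=1.0, kappa=200.0, alpha=None, n_steps=1000):
    """Minimal sketch of Split LBI (1.4) for the loss
    ell(beta, gamma) = ||y - X beta||^2/(2n) + ||gamma - D beta||^2/(2 nu)."""
    n, p = X.shape
    m = D.shape[0]
    if alpha is None:  # conservative step size for stability (cf. Section 3.1)
        sigma_X2 = np.linalg.norm(X, 2) ** 2 / n
        Lambda_D2 = np.linalg.norm(D, 2) ** 2
        alpha = nu / (kappa * (1 + nu * sigma_X2 + Lambda_D2))
    beta, gamma, z = np.zeros(p), np.zeros(m), np.zeros(m)
    path = []
    for _ in range(n_steps):
        g = (gamma - D @ beta) / nu                 # grad_gamma of ell
        grad_beta = -X.T @ (y - X @ beta) / n - D.T @ g
        beta = beta - kappa * alpha * grad_beta     # (1.4a)
        z = z - alpha * g                           # (1.4b)
        gamma = kappa * np.sign(z) * np.maximum(np.abs(z) - 1, 0)  # (1.4c)
        path.append((beta.copy(), gamma.copy()))
    return path
```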
In fact, without the sparsity enforcement (1.4c), the algorithm is the Landweber Iteration in inverse problems [YRC07], also known as L2-Boost [BY02] in statistics. When D = I and ν → 0, which enforces γ = Dβ = β, the iteration (1.4) reduces (by dropping (1.4a)) to the popular Linearized Bregman Iteration (LBI) for linear regression or compressed sensing, first proposed in [Yin+08]. This simple iterative scheme returns the whole regularization path at the same cost as computing one Lasso estimator at a fixed regularization parameter using the iterative soft-thresholding algorithm. Moreover, the LBI regularization path can be better than the Lasso regularization path, which is always biased. In fact, [Osh+16] recently showed that under nearly the same conditions as standard Lasso, LBI may achieve sign-consistency but with a less biased estimator than Lasso, which in the limit dynamics reaches the bias-free Oracle estimator.
The difference between (1.4) and standard LBI lies in the partial sparsity control on γ, which splits the structural sparsity on Dβ into a sparse γ and Dβ by controlling their gap ‖γ − Dβ‖²/(2ν). Hereafter, algorithm (1.4) is called Split LBI in this paper.
Split LBI generates a sequence (β_k, γ_k)_{k∈N} which indeed defines a discrete regularization path. Furthermore, the path can be more accurate than that of generalized Lasso, in terms of an Area Under the Curve (AUC) measurement of the order in which the path coordinates become nonzero, compared against the ground truth sparsity pattern. The following simple experiment illustrates these properties.
Example 1. Consider two problems: standard Lasso and 1-D fused Lasso. In both cases, set n = p = 50, and generate X ∈ R^{n×p} whose rows are n i.i.d. samples from N(0, I_p), ε ∼ N(0, I_n), and y = Xβ* + ε. Set β*_j = 2 (if 1 ≤ j ≤ 10), −2 (if 11 ≤ j ≤ 15), and 0 (otherwise). For Lasso we choose D = I, and for 1-D fused Lasso we choose D = [D1; D2] ∈ R^{(p−1+p)×p} such that (D1β)_j = β_j − β_{j+1} (for 1 ≤ j ≤ p − 1) and D2 = I_p. The left panel of Figure 1 shows the regularization paths computed by genlasso ({Dβ̂_λ}) and by iteration (1.4) (linear interpolation of {γ_k}) with κ = 200 and ν ∈ {1, 5, 10}, respectively. The generalized Lasso path is in fact piecewise linear with respect to λ, but we show it along t = 1/λ for comparison. Note that the iterative paths exhibit a variety of different shapes depending on the choice of ν. However, in terms of the order in which the curves enter the nonzero range, these iterative paths exhibit better accuracy than genlasso. Table 1 shows this by the mean AUC over 100 independent experiments in each case, where increasing ν improves the model selection accuracy of Split LBI paths beyond that of generalized Lasso.
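A minimal sketch of the synthetic setup in Example 1 (the random seed is arbitrary, and the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = p = 50
X = rng.standard_normal((n, p))              # rows ~ N(0, I_p)
beta_star = np.zeros(p)
beta_star[:10] = 2.0
beta_star[10:15] = -2.0
y = X @ beta_star + rng.standard_normal(n)   # eps ~ N(0, I_n)

# D for the 1-D fused Lasso: D = [D1; D2] with (D1 beta)_j = beta_j - beta_{j+1}
# and D2 = I_p; for standard Lasso simply take D = np.eye(p).
D1 = np.eye(p - 1, p) - np.eye(p - 1, p, k=1)
D = np.vstack([D1, np.eye(p)])
```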
Why does the simple iterative algorithm (1.4) work, even better than the generalized Lasso? In this paper, we aim to answer this question by presenting a theory of model selection consistency for (1.4).
Figure 1: Left shows {Dβ̂_λ} (t = 1/λ) by genlasso and {γ_k} (t = kα) by Split LBI (1.4) with ν = 1, 5, 10, for the 1-D fused Lasso. Right is a comparison between our family of Irrepresentable Conditions (IRR(ν)) and the IC in [Vai+13], with a log-scale horizontal axis. As ν grows, IRR(ν) can be significantly smaller than IC0 and IC1, so that our model selection condition is easier to meet!
Table 1: Mean AUC (with standard deviation) comparisons where Split LBI (1.4) beats genlasso. Left is for the standard Lasso; right is for the 1-D fused Lasso in Example 1.

    Standard Lasso:   genlasso .9426 (.0390) | Split LBI ν=1: .9845 (.0185) | ν=5: .9969 (.0065) | ν=10: .9982 (.0043)
    1-D fused Lasso:  genlasso .9705 (.0212) | Split LBI ν=1: .9955 (.0056) | ν=5: .9996 (.0014) | ν=10: .9998 (.0009)
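The AUC here measures how well the order in which path coordinates become nonzero agrees with the true support. The following sketch is one natural reading of that measurement (the function name and tie-handling are our assumptions, not the authors' exact evaluation code):

```python
import numpy as np

def path_auc(gamma_path, support):
    """AUC for ranking coordinates by how early they become nonzero along a path.
    gamma_path: array (T, m) of path iterates; support: boolean array (m,)."""
    T, m = gamma_path.shape
    entry = np.full(m, T)                  # first step at which coordinate j is nonzero
    for j in range(m):
        nz = np.nonzero(gamma_path[:, j])[0]
        if nz.size:
            entry[j] = nz[0]
    score = -entry.astype(float)           # earlier entry => higher score
    pos, neg = score[support], score[~support]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties            # AUC = P(pos > neg) + 0.5 P(tie)
```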
Model selection and estimation consistency of the generalized Lasso (1.2) have been studied in previous work. [SSR12] considered the model selection consistency of the edge Lasso, with a special D in (1.2), which has applications over graphs. [LYY13] provides an upper bound on the estimation error by assuming the design matrix X is a Gaussian random matrix. In particular, [Vai+13] proposes a general condition called the Identifiability Criterion (IC) for sign consistency. [LST13] establishes a general framework for model selection consistency of penalized M-estimators, proposing an Irrepresentable Condition which is equivalent to the IC of [Vai+13] under the specific setting of (1.2). In fact, both of these conditions are sufficient and necessary for structural sparse recovery by the generalized Lasso (1.2) in a certain sense.
However, as we shall see, the benefits of exploiting algorithm (1.4) lie not only in its algorithmic simplicity, but also in a possible theoretical improvement on model selection consistency. Below, a new family of Irrepresentable Conditions depending on ν will be presented for iteration (1.4), under which model selection consistency can be established. Moreover, this family can be weaker than IC as the parameter ν grows, which sheds light on the superb performance of Split LBI observed above. The main contributions of this paper can be summarized as follows: (A) a new iterative regularization path with structural sparsity, given by (1.4); (B) a theory of path consistency which shows the model selection consistency of (1.4) under conditions that can be weaker than those for generalized Lasso, together with ℓ2 error bounds at minimax optimal rates. Further experiments are given, with applications to 2-D image reconstruction and partial order estimation.
1.1 Notation
For a matrix Q with m rows (D for example) and J ⊆ {1, 2, ..., m}, let Q_J = Q_{J,·} be the submatrix of Q with rows indexed by J. However, for Q ∈ R^{n×p} (X for example) and J ⊆ {1, 2, ..., p}, let Q_J = Q_{·,J} be the submatrix of Q with columns indexed by J, abusing notation slightly.
Sometimes we write ⟨a, b⟩ := aᵀb, denoting the inner product between vectors a, b. P_L denotes the projection matrix onto a linear subspace L. Let L1 + L2 := {ξ1 + ξ2 : ξ1 ∈ L1, ξ2 ∈ L2} for subspaces L1, L2. For a matrix Q, Q† denotes the Moore-Penrose pseudoinverse of Q, and we recall that Q† = (QᵀQ)†Qᵀ. Let λ_min(Q), λ_max(Q) denote the smallest and largest singular values (i.e. eigenvalues if Q is symmetric) of Q. For symmetric matrices P and Q, Q ⪰ P (or Q ≻ P) means that Q − P is positive semi-definite (respectively, positive definite). Let Q* := Qᵀ/n.
2 Path Consistency of Split LBI
2.1 Basic Assumptions
For the identifiability of β*, we assume that β* and its estimators of interest are restricted to

    L := (ker(X) ∩ ker(D))^⊥ = Im(Xᵀ) + Im(Dᵀ),

since replacing β* with the projection of β* onto L does not change the model.
Note that ℓ(β, γ) is quadratic, and we can define its Hessian matrix, which depends on ν > 0:

    H(ν) := ∇²ℓ(β, γ) = [ X*X + DᵀD/ν   −Dᵀ/ν ; −D/ν   I_m/ν ].    (2.1)
We make the following assumptions on H.
Assumption 1 (Restricted Strong Convexity (RSC)). There is a constant λ_H > 0 such that

    (βᵀ, γ_Sᵀ) H_{(β,S),(β,S)} (β; γ_S) ≥ λ_H ‖(β; γ_S)‖₂²,  for all β ∈ L, γ_S ∈ R^s.    (2.2)

Remark 1. Since the true parameter satisfies supp(γ*) = supp(Dβ*) = S, this is equivalent to saying that the loss ℓ(β, γ) is strongly convex when restricted to the sparse subspace corresponding to the support of γ*.
Assumption 2 (Irrepresentable Condition (IRR)). There is a constant η ∈ (0, 1] such that

    sup_{τ ∈ [−1,1]^s} ‖ H_{S^c,(β,S)} H_{(β,S),(β,S)}† (0_p; τ) ‖_∞ ≤ 1 − η.    (2.3)

Remark 2. IRR here directly generalizes the Irrepresentable Condition from standard Lasso [ZY06] and other algorithms [Tro04] to the partial Lasso: min_{β,γ} (ℓ(β, γ) + λ‖γ‖₁). Following the standard Lasso, one version of the Irrepresentable Condition would be

    ‖ H_{S^c,(β,S)} H_{(β,S),(β,S)}† ρ̃_{(β,S)} ‖_∞ ≤ 1 − η,  where ρ̃_{(β,S)} = (0_p; ρ̃_S).

Here ρ̃_{(β,S)} is the value of the gradient (subgradient) of the ℓ1 penalty function ‖·‖₁ at (β*; γ*_S), and ρ̃_β = 0_p because β is not assumed to be sparse and hence is not penalized. Assumption 2 slightly strengthens this by taking a supremum over τ, for uniform sparse recovery independent of the particular sign pattern of γ*.
2.2 Equivalent Conditions and a Comparison Theorem
The assumptions above, though natural, are not convenient to compare with those in [Vai+13]. Here we present some equivalent conditions, followed by a comparison theorem showing that IRR can be weaker than the IC in [Vai+13], a necessary and sufficient condition for model selection consistency of the generalized Lasso.
First of all, we introduce some notation. Given γ, minimizing ℓ over β solves β = A†(νX*y + Dᵀγ), where A := νX*X + DᵀD. Substituting A†(νX*y + Dᵀγ_k) for β_k in (1.4b), and dropping (1.4a), we have

    z_{k+1} = z_k + α(DA†X*y − Θγ_k),    (2.4a)
    γ_{k+1} = κ · prox_{‖·‖₁}(z_{k+1}),    (2.4b)

where

    Θ := (I − DA†Dᵀ)/ν,  A = νX*X + DᵀD.    (2.5)

In other words, Θ is the Schur complement of H_{β,β} in the Hessian matrix H(ν). Comparing (2.4) with the standard LBI (D = I) studied in [Osh+16], we see that Θ in our paper plays a role similar to that of X*X in theirs. To obtain the path consistency results of standard LBI, [Osh+16] propose a "Restricted Strong Convexity" and an "Irrepresentable Condition" on X*X. So in this paper, we can state similar assumptions on Θ (instead of H), which actually prove to be equivalent to Assumptions 1 and 2, and are closely related to the literature.
Precisely, by Lemma 6 in the Supplementary Information, Assumption 1 is equivalent to
Assumption 1′ (Restricted Strong Convexity (RSC)). There is a constant λ_Θ > 0 such that

    Θ_{S,S} ⪰ λ_Θ I.    (2.6)

Remark 3. Lemma 2 in the Supplementary Information says that Θ_{S,S} ≻ 0 ⟺ ker(D_{S^c}) ∩ ker(X) ⊆ ker(D_S), which is also a natural assumption for the uniqueness of γ*. Actually, if it fails, then there is some δ such that D_{S^c}δ = 0 and Xδ = 0 while D_Sδ ≠ 0. Thus for β′* := β* + δ, we have y = Xβ′* + ε and supp(Dβ′*) ⊆ supp(Dβ*) = S, while D_Sβ′* ≠ D_Sβ*. Therefore one can neither estimate β* nor D_Sβ*, even if the support set is known or has been exactly recovered.
When Θ_{S,S} ≻ 0, Lemma 7 in the Supplementary Information implies that Assumption 2 is equivalent to
Assumption 2′ (Irrepresentable Condition (IRR)). There is a constant η ∈ (0, 1] such that

    ‖ Θ_{S^c,S} Θ_{S,S}^{−1} ‖_∞ ≤ 1 − η.    (2.7)

Remark 4. For standard Lasso problems (D = I), it is easy to derive Θ = X*(I + νXX*)^{−1}X ≈ X*X when ν is small. So Assumption 1′ approximates the usual Restricted Strong Convexity assumption X*_S X_S ⪰ λ_Θ I, and Assumption 2′ approximates the usual Irrepresentable Condition ‖X*_{S^c} X_S (X*_S X_S)^{−1}‖_∞ ≤ 1 − η for standard Lasso problems.
The left hand side of (2.7) depends on the parameter ν. From now on, define

    IRR(ν) := ‖Θ_{S^c,S} Θ_{S,S}^{−1}‖_∞,  IRR(0) := lim_{ν→0} IRR(ν),  IRR(∞) := lim_{ν→+∞} IRR(ν).    (2.8)
Now we are going to compare Assumption 2′ with the assumption in [Vai+13]. Let W be a matrix whose columns form an orthogonal basis of ker(D_{S^c}), and define

    ω̄_S := (D_{S^c}†)ᵀ ( X*X W (Wᵀ X*X W)† Wᵀ − I ) D_Sᵀ,
    IC0 := ‖ ω̄_S sign(D_S β*) ‖_∞,  IC1 := min_{u ∈ ker(D_{S^c}ᵀ)} ‖ ω̄_S sign(D_S β*) − u ‖_∞.

[Vai+13] proved the sign consistency of the generalized Lasso estimator of (1.2), for a specifically chosen λ, under the assumption IC1 < 1 along with ker(D_{S^c}) ∩ ker(X) = {0}. As we shall see later, the same conclusion holds under the assumption IRR(ν) ≤ 1 − η along with Assumption 1′, which is equivalent to ker(D_{S^c}) ∩ ker(X) ⊆ ker(D_S). Which assumption is weaker and thus easier to satisfy? The following theorem answers this; its proof is in the Supplementary Information.
Theorem 1 (Comparison between the IRR in Assumption 2′ and the IC in [Vai+13]).
1. IC0 ≥ IC1.
2. IRR(0) exists, and IRR(0) = IC0.
3. IRR(∞) exists, and IRR(∞) = 0 if and only if ker(X) ⊆ ker(D_S).
From this comparison theorem, with a design matrix X of full column rank, as ν grows, IRR(ν) < IC1 ≤ IC0; hence Assumption 2′ is weaker than IC. Now recall the setting of Example 1, where ker(X) = {0} generically. In the right panel of Figure 1, the solid and dashed horizontal red lines denote IC0 and IC1, and we see that the blue curve denoting IRR(ν) approaches IC0 as ν → 0 and approaches 0 as ν → +∞, which illustrates Theorem 1 (here each of IC0, IC1 and IRR(ν) is the mean of 100 values calculated under 100 generated X's). Although IRR(0) = IC0 is slightly larger than IC1, IRR(ν) can be significantly smaller than IC1 if ν is not tiny. On the right side of the vertical line, IRR(ν) drops below 1, indicating that Assumption 2′ is satisfied while the assumption in [Vai+13] fails.
Remark 5. Although Theorem 1 suggests adopting a large ν, ν cannot be arbitrarily large. From Assumption 1′ and the definition of Θ, 1/ν ≥ ‖Θ‖₂ ≥ ‖Θ_{S,S}‖₂ ≥ λ_Θ. So if ν is too large, λ_Θ has to be small, which deteriorates the estimator in terms of the ℓ2 error shown next.
2.3 Consistency of Split LBI
We are ready to establish the theorems on path consistency of Split LBI (1.4), under Assumptions 1 and 2. The proofs are based on a careful treatment of the limit dynamics of (1.4) and are collected in the Supplementary Information. Before stating the theorems, we need some definitions and constants.
Let the compact singular value decomposition (compact SVD) of D be

    D = UΛVᵀ  (Λ ∈ R^{r×r}, Λ ≻ 0, U ∈ R^{m×r}, V ∈ R^{p×r}),    (2.9)

and let (V, V⊥) be an orthogonal square matrix. Let the compact SVD of X V⊥/√n be

    X V⊥/√n = U1 Λ1 V1ᵀ  (Λ1 ∈ R^{r′×r′}, Λ1 ≻ 0, U1 ∈ R^{n×r′}, V1 ∈ R^{(p−r)×r′}),    (2.10)

and let (V1, V1⊥) be an orthogonal square matrix. Let

    σ_X = √(λ_max(X*X)),  λ_D = λ_min(Λ),  Λ_D = λ_max(Λ),  λ1 = λ_min(Λ1).    (2.11)

We see that Λ_D is the largest singular value of D, λ_D is the smallest nonzero singular value of D, and λ1² is the smallest nonzero eigenvalue of V⊥ᵀ X*X V⊥. If D has full column rank, then r = p and r′ = 0, so V⊥, U1, Λ1, V1, λ1 all drop, while V1⊥ ∈ R^{(p−r)×(p−r)} is an orthogonal square matrix.
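The constants in (2.11) can be computed numerically from X and D; the following sketch (function name and the 1e-10 tolerance for "nonzero" are our choices) follows (2.9)-(2.10):

```python
import numpy as np

def model_constants(X, D):
    """sigma_X, lambda_D, Lambda_D, lambda_1 of (2.11) from the compact SVDs."""
    n = X.shape[0]
    sigma_X = np.sqrt(np.linalg.norm(X, 2) ** 2 / n)   # sqrt(lambda_max(X*X))
    s = np.linalg.svd(D, compute_uv=False)
    s = s[s > 1e-10]                                   # nonzero singular values of D
    lambda_D, Lambda_D = s.min(), s.max()
    # V_perp spans the orthogonal complement of the row space of D
    _, _, Vt = np.linalg.svd(D, full_matrices=True)
    V_perp = Vt[len(s):].T
    s1 = np.linalg.svd(X @ V_perp / np.sqrt(n), compute_uv=False)
    lambda_1 = s1[s1 > 1e-10].min() if (s1 > 1e-10).any() else np.nan
    return sigma_X, lambda_D, Lambda_D, lambda_1
```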
The following theorem says that under Assumptions 1 and 2, Split LBI will automatically evolve in an "oracle" subspace (unknown to us), restricted within the support set of (β*, γ*), before leaving it; and if the signal parameters are strong enough, sign consistency will be reached. Moreover, ℓ2 error bounds on β_k and γ_k are given.
Theorem 2 (Consistency of Split LBI). Under Assumptions 1 and 2, suppose κ is large enough to satisfy condition (2.12), a lower bound on κ involving ν, √s, σ_X, Λ_D, λ_D, λ1, λ_H, η and (1 + Λ_D)‖β*‖₂, and suppose κα‖H‖₂ < 2. Let

    τ̄ := (ηλ_D / (8κσ_X)) √(n / log m),  K := ⌊τ̄ / α⌋,  λ′_H := λ_H (1 − κα‖H‖₂ / 2) > 0.

Then with probability not less than 1 − 6/m − 3 exp(−4n/5), we have all the following properties.
1. No-false-positives: The solution has no false positives, i.e., supp(γ_k) ⊆ S, for 0 ≤ kα ≤ τ̄.
2. Sign consistency of γ_k: Once the signal is strong enough that

    γ*_min := min_{j∈S} |(D_S β*)_j| ≥ (16κ σ_X Λ_D / (η λ′_H (1 − 5α/τ̄) λ_D²)) √(2 log s + 5 + log(8Λ_D)) √(log m / n),    (2.13)

then γ_k has sign consistency at K, i.e., sign(γ_K) = sign(Dβ*).
3. ℓ2 consistency of γ_k:

    ‖γ_K − Dβ*‖₂ ≤ (42κ σ_X / (η λ′_H (1 − α/τ̄) λ_D)) √(s log m / n) + (2κ/λ1) √(s log m / n).

4. ℓ2 consistency of β_k:

    ‖β_K − β*‖₂ ≤ (42κ (λ1 σ_X (1 + Λ_D) + σ_X²) / (η λ′_H (1 − α/τ̄) λ1 λ_D²)) √(s log m / n)
                  + ν √(2κ) ((λ1 σ_X + σ_X²) / (λ1 λ_D²)) √(r′ log m / n).
Although the sign consistency of γ_k can be established here, usually one cannot expect Dβ_k to recover the sparsity pattern of γ*, due to the variable splitting. As shown in the last term of the ℓ2 error bound on β_k, increasing ν will sacrifice its accuracy. However, one can remedy this by projecting β_k onto a subspace determined by the support set of γ_k, obtaining a good estimator β̃_k with both sign consistency and ℓ2 consistency at the minimax optimal rates.
Theorem 3 (Consistency of a revised version of Split LBI). Under Assumptions 1 and 2, suppose κ is large enough to satisfy (2.12), and κα‖H‖₂ < 2. Here τ̄, K, λ′_H are defined as in Theorem 2. Define

    S_k := supp(γ_k),  P_{S_k} := P_{ker(D_{S_k^c})} = I − D_{S_k^c}† D_{S_k^c},  β̃_k := P_{S_k} β_k.

If S_k^c = ∅, define P_{S_k} = I. Then we have the following properties.
1. Sign consistency of β̃_k: If the γ*_min condition (2.13) holds, then with probability not less than 1 − 8/m − 3 exp(−4n/5), there holds sign(Dβ̃_K) = sign(Dβ*).
2. ℓ2 consistency of β̃_k: With probability not less than 1 − 8/m − 2r′/m² − 3 exp(−4n/5), we have, for 0 ≤ kα ≤ τ̄,
    ‖β̃_k − β*‖₂ ≤ 10√s / (λ′_H kα) + (2κ σ_X Λ_D / (λ′_H λ_D³)) √(s log m / n)
                  + (2κ σ_X / (λ1 λ_D²)) √(r′ log m / n)
                  + ((λ′_H λ_D² + σ_X²) / (λ′_H λ_D²)) ‖D_{S_k^c}† D_{S_k^c} β*‖₂.

Consequently, if additionally S_K = S, then the last term on the right hand side drops for k = K, and the bound reaches

    ‖β̃_K − β*‖₂ ≤ (80κ (σ_X Λ_D + λ_D²) / (η λ′_H (1 − α/τ̄) λ_D³)) √(s log m / n)
                  + (2κ σ_X / (λ′_H λ_D²) + (λ′_H λ_D² + σ_X²) / (λ1 λ_D²)) √(r′ log m / n).
Remark 6. Note that r′ ≤ min(n, p − r). In many real applications, r′ is very small. So the dominant ℓ2 error rate is O(√(s log m / n)), which is minimax optimal [LST13; LYY13].
3 Experiments
3.1 Parameter Setting
Parameter κ should be large enough according to (2.12). Moreover, the step size α should be small enough to ensure the stability of Split LBI. Once ν and κ are determined, α can actually be determined by α = ν/(κ(1 + νσ_X² + Λ_D²)) (see (C.6) in the Supplementary Information).
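A small helper reflecting this rule (our reading of (C.6); with this choice κα‖H(ν)‖₂ stays below 2):

```python
import numpy as np

def step_size(X, D, nu, kappa):
    """alpha = nu / (kappa (1 + nu sigma_X^2 + Lambda_D^2))."""
    n = X.shape[0]
    sigma_X2 = np.linalg.norm(X, 2) ** 2 / n   # lambda_max(X*X) = sigma_X^2
    Lambda_D2 = np.linalg.norm(D, 2) ** 2      # Lambda_D^2
    return nu / (kappa * (1 + nu * sigma_X2 + Lambda_D2))
```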
3.2 Application: Image Denoising
Consider the image denoising problem in [TT11]. The original image is resized to 50 × 50 and reset to only four colors, as in the top left image in Figure 2. Some noise is added by randomly changing some pixels to white, as in the bottom left. Let G = (V, E) be the 4-nearest-neighbor grid graph on pixels; then β = (β_R, β_G, β_B) ∈ R^{3|V|}, since there are 3 color channels (RGB channels). X = I_{3|V|} and D = diag(D_G, D_G, D_G), where D_G ∈ R^{|E|×|V|} is the gradient operator on graph G defined by (D_G x)(e_{ij}) = x_i − x_j, e_{ij} ∈ E. Set ν = 180, κ = 100. The regularization path of Split LBI is shown in Figure 2: as t evolves, images on the path gradually select visually salient features before picking up the random noise. Now compare the AUC (Area Under the Curve) of genlasso and the Split LBI algorithm with different ν. For simplicity we show the AUC corresponding to the red color channel. Here ν ∈ {1, 20, 40, 60, ..., 300}. As shown in the right panel of Figure 2, as ν increases, Split LBI beats genlasso with higher AUC values.
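A sketch of the graph gradient operator D_G on the 4-nearest-neighbor grid, using scipy.sparse (the function name is ours):

```python
import numpy as np
import scipy.sparse as sp

def grid_gradient(h, w):
    """D_G for an h x w pixel grid: one row per edge e_ij, (D_G x)(e_ij) = x_i - x_j."""
    idx = np.arange(h * w).reshape(h, w)
    edges = [(idx[i, j], idx[i, j + 1]) for i in range(h) for j in range(w - 1)]
    edges += [(idx[i, j], idx[i + 1, j]) for i in range(h - 1) for j in range(w)]
    rows = np.repeat(np.arange(len(edges)), 2)
    cols = np.array(edges).ravel()
    vals = np.tile([1.0, -1.0], len(edges))
    return sp.csr_matrix((vals, (rows, cols)), shape=(len(edges), h * w))

# For the 3 RGB channels: D = sp.block_diag([DG, DG, DG]) and X = identity, as in the text.
```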
3.3 Application: Partial Order Ranking for Basketball Teams
Figure 2: Left: image denoising results by Split LBI, showing the original figure, the noisy figure, and images along the path at t = 9.3798, 23.7812, 60.5532 and 617.1275. Right: the AUC of Split LBI (blue solid line) increases and exceeds that of genlasso (dashed red line) as ν increases.
Figure 3: Partial order ranking for basketball teams. Top left: {β̂_λ} (t = 1/λ) by genlasso and {β̃_k} (t = kα) by Split LBI. Top right: grouping result just after passing t5. Bottom: FIBA ranking.
Here we consider a new application: ranking p = 12 FIBA basketball teams into partial orders. The teams are listed in Figure 3. We collected n = 134 pairwise comparison game results, mainly from various important championships such as the Olympic Games, the FIBA World Championship, and the FIBA Basketball Championships in 5 continents, from 2006-2014 (8 years is not too long for teams to keep relatively stable levels, yet not too short to collect enough samples). For each sample indexed by k and corresponding to the team pair (i, j), y_k = s_i − s_j is the score difference between teams i and j. We assume a model y_k = β*_{i_k} − β*_{j_k} + ε_k, where β* ∈ R^p measures the strength of these teams. So the design matrix X ∈ R^{n×p} is defined by its k-th row: x_{k,i_k} = 1, x_{k,j_k} = −1, x_{k,l} = 0 (l ≠ i_k, j_k). In sports, teams of similar strength meet more often than those at different levels. Thus we hope to find a coarse-grained partial order ranking by adding structural sparsity on Dβ*, where D = cX (c scales the smallest nonzero singular value of D to be 1).
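A sketch of constructing X, y and D from the game records; the function name and the input format are our illustrative assumptions:

```python
import numpy as np

def pairwise_design(pairs, scores, p):
    """pairs: list of (i, j) team indices; scores: list of (s_i, s_j) game scores.
    Builds y_k = s_i - s_j, the +1/-1 design matrix X, and D = c X."""
    n = len(pairs)
    X = np.zeros((n, p))
    y = np.empty(n)
    for k, ((i, j), (si, sj)) in enumerate(zip(pairs, scores)):
        X[k, i], X[k, j] = 1.0, -1.0
        y[k] = si - sj
    svals = np.linalg.svd(X, compute_uv=False)
    c = 1.0 / svals[svals > 1e-10].min()   # smallest nonzero singular value of D is 1
    return X, y, c * X
```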
The top left panel of Figure 3 shows {β̂_λ} by genlasso and β̃_k by Split LBI with ν = 1 and κ = 100. Both paths give the same partial order at early stages, though the Split LBI path looks qualitatively better. For example, the top right panel shows the same partial order just after the change point t5. It is interesting to compare it against the FIBA ranking of September 2014, shown at the bottom. Note that the average basketball level in Europe is higher than that in Asia and Africa; hence China can earn more FIBA points than Germany based on its dominant position in Asia, as can Angola in Africa. But their true levels might be lower than Germany's, as indicated in our results. Moreover, America (FIBA points 1040.0) forms a group by itself, agreeing with the common sense that it is much better than any other country. Spain, with much higher FIBA ranking points (705.0) than the third-ranked team Argentina (455.0), also forms a group alone. It is the only team that has challenged America in recent years, reaching the finals against America in both 2008 and 2012.
Acknowledgments
The authors were supported in part by National Basic Research Program of China under grants
2012CB825501 and 2015CB856000, as well as NSFC grants 61071157 and 11421110001.
References
[AT16] Taylor B. Arnold and Ryan J. Tibshirani. "Efficient Implementations of the Generalized Lasso Dual Path Algorithm". In: Journal of Computational and Graphical Statistics 25.1 (2016), pp. 1-27.
[BY02] Peter Bühlmann and Bin Yu. "Boosting with the L2-Loss: Regression and Classification". In: Journal of the American Statistical Association 98 (2002), pp. 324-340.
[Efr+04] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. "Least angle regression". In: The Annals of Statistics 32.2 (2004), pp. 407-499.
[GO09] Tom Goldstein and Stanley Osher. "The Split Bregman Method for L1-Regularized Problems". In: SIAM Journal on Imaging Sciences 2.2 (2009), pp. 323-343.
[Hoe10] Holger Hoefling. "A Path Algorithm for the Fused Lasso Signal Approximator". In: Journal of Computational and Graphical Statistics 19.4 (2010), pp. 984-1006.
[LST13] Jason D. Lee, Yuekai Sun, and Jonathan E. Taylor. "On model selection consistency of penalized M-estimators: a geometric theory". In: Advances in Neural Information Processing Systems (NIPS) 26. 2013, pp. 342-350.
[LYY13] Ji Liu, Lei Yuan, and Jieping Ye. "Guaranteed Sparse Recovery under Linear Transformation". In: Proceedings of The 30th International Conference on Machine Learning. 2013, pp. 91-99.
[Moe12] Michael Moeller. "Multiscale Methods for Polyhedral Regularizations and Applications in High Dimensional Imaging". PhD thesis. Germany: University of Muenster, 2012.
[Osh+16] Stanley Osher, Feng Ruan, Jiechao Xiong, Yuan Yao, and Wotao Yin. "Sparse recovery via differential inclusions". In: Applied and Computational Harmonic Analysis (2016). DOI: 10.1016/j.acha.2016.01.002.
[ROF92] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. "Nonlinear Total Variation Based Noise Removal Algorithms". In: Physica D: Nonlinear Phenomena 60.1-4 (Nov. 1992), pp. 259-268.
[RT14] Aaditya Ramdas and Ryan J. Tibshirani. "Fast and Flexible ADMM Algorithms for Trend Filtering". In: Journal of Computational and Graphical Statistics (2014). DOI: 10.1080/10618600.2015.1054033.
[SSR12] James Sharpnack, Aarti Singh, and Alessandro Rinaldo. "Sparsistency of the edge lasso over graphs". In: International Conference on Artificial Intelligence and Statistics. 2012, pp. 1028-1036.
[Tib+05] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. "Sparsity and smoothness via the fused lasso". In: Journal of the Royal Statistical Society, Series B (2005), pp. 91-108.
[Tib96] Robert Tibshirani. "Regression shrinkage and selection via the lasso". In: Journal of the Royal Statistical Society, Series B (Methodological) (1996), pp. 267-288.
[Tro04] Joel A. Tropp. "Greed is good: Algorithmic results for sparse approximation". In: IEEE Trans. Inform. Theory 50.10 (2004), pp. 2231-2242.
[TT11] Ryan J. Tibshirani and Jonathan Taylor. "The solution path of the generalized lasso". In: The Annals of Statistics 39.3 (June 2011), pp. 1335-1371.
[Vai+13] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. "Robust Sparse Analysis Regularization". In: IEEE Transactions on Information Theory 59.4 (Apr. 2013), pp. 2001-2016.
[Wah+12] Bo Wahlberg, Stephen Boyd, Mariette Annergren, and Yang Wang. "An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems". In: IFAC Proceedings Volumes, 16th IFAC Symposium on System Identification 45.16 (2012), pp. 83-88.
[Yin+08] Wotao Yin, Stanley Osher, Jerome Darbon, and Donald Goldfarb. "Bregman Iterative Algorithms for Compressed Sensing and Related Problems". In: SIAM Journal on Imaging Sciences 1.1 (2008), pp. 143-168.
[YRC07] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. "On Early Stopping in Gradient Descent Learning". In: Constructive Approximation 26.2 (2007), pp. 289-315.
[YX11] Gui-Bo Ye and Xiaohui Xie. "Split Bregman method for large scale fused Lasso". In: Computational Statistics & Data Analysis 55.4 (2011), pp. 1552-1569.
[Zha06] Fuzhen Zhang. The Schur Complement and Its Applications. Springer Science & Business Media, 2006. 308 pp. ISBN: 978-0-387-24273-6.
[Zhu15] Yunzhang Zhu. "An augmented ADMM algorithm with application to the generalized lasso problem". In: Journal of Computational and Graphical Statistics (2015). DOI: 10.1080/10618600.2015.1114491.
[ZY06] Peng Zhao and Bin Yu. "On Model Selection Consistency of Lasso". In: Journal of Machine Learning Research 7 (2006), pp. 2541-2567.
5,845 | 6,289 | An Ensemble Diversity Approach to Supervised Binary Hashing
Ramin Raziperchikolaei
EECS, University of California, Merced
[email protected]
Miguel Á. Carreira-Perpiñán
EECS, University of California, Merced
[email protected]
Abstract
Binary hashing is a well-known approach for fast approximate nearest-neighbor
search in information retrieval. Much work has focused on affinity-based objective
functions involving the hash functions or binary codes. These objective functions
encode neighborhood information between data points and are often inspired by
manifold learning algorithms. They ensure that the hash functions differ from each
other through constraints or penalty terms that encourage codes to be orthogonal
or dissimilar across bits, but this couples the binary variables and complicates the
already difficult optimization. We propose a much simpler approach: we train
each hash function (or bit) independently from each other, but introduce diversity
among them using techniques from classifier ensembles. Surprisingly, we find
that not only is this faster and trivially parallelizable, but it also improves over the
more complex, coupled objective function, and achieves state-of-the-art precision
and recall in experiments with image retrieval.
Information retrieval tasks such as searching for a query image or document in a database are essentially a nearest-neighbor search [33]. When the dimensionality of the query and the size of the database are large, approximate search is necessary. We focus on binary hashing [17], where the query and database are mapped onto low-dimensional binary vectors, where the search is performed. This has two speedups: computing Hamming distances (with hardware support) is much faster than computing distances between high-dimensional floating-point vectors; and the entire database becomes much smaller, so it may reside in fast memory rather than on disk (for example, a database of 1 billion real vectors of dimension 500 takes 2 TB in floating point but 8 GB as 64-bit codes).
Constructing hash functions that do well in retrieval measures such as precision and recall is usually done by optimizing an affinity-based objective function that relates Hamming distances to supervised neighborhood information in a training set. Many such objective functions have the form of a sum of pairwise terms that indicate whether the training points x_n and x_m are neighbors:

    min_h L(h) = Σ_{n,m=1}^N L(z_n, z_m; y_nm),  where z_m = h(x_m), z_n = h(x_n).

Here, X = (x_1, ..., x_N) is the dataset of high-dimensional feature vectors (e.g., SIFT features of an image), h: R^D → {−1, +1}^b are b binary hash functions and z = h(x) is the b-bit code vector for input x ∈ R^D, min_h means minimizing over the parameters of the hash function h (e.g., over the weights of a linear SVM), and L(·) is a loss function that compares the codes for two images (often through their Hamming distance ‖z_n − z_m‖) with the ground-truth value y_nm that measures the affinity in the original space between the two images x_n and x_m (distance, similarity or another measure of neighborhood). The sum is often restricted to a subset of image pairs (n, m) (for example, within the k nearest neighbors of each other in the original space) to keep the runtime low. The output of the algorithm is the hash function h and the binary codes Z = (z_1, ..., z_N) for the training points, where z_n = h(x_n) for n = 1, ..., N. Examples of these objective functions are Supervised Hashing with Kernels (KSH) [28], Binary Reconstructive Embeddings (BRE) [21] and the binary Laplacian loss (an extension of the Laplacian Eigenmaps objective [2]), where L(z_n, z_m; y_nm) is:
    KSH: (z_nᵀ z_m − b y_nm)²    BRE: ((1/b)‖z_n − z_m‖² − y_nm)²    LAP: y_nm ‖z_n − z_m‖²    (1)

where for KSH y_nm is 1 if x_n, x_m are similar and −1 if they are dissimilar; for BRE y_nm = (1/2)‖x_n − x_m‖² (where the dataset X is normalized so the Euclidean distances are in [0, 1]); and for the Laplacian loss y_nm > 0 if x_n, x_m are similar and < 0 if they are dissimilar ("positive" and "negative" neighbors). Other examples of these objectives include models developed for dimension
reduction, be they spectral, such as Locally Linear Embedding [32] or Anchor Graphs [27], or nonlinear, such as the Elastic Embedding [7] or t-SNE; as well as objectives designed specifically for binary hashing, such as Semi-supervised sequential Projection Learning Hashing (SPLH) [34]. They can all produce good hash functions. We will focus on the Laplacian loss in this paper.
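For reference, the three pairwise losses of (1) in a short NumPy sketch (the function name and argument layout are ours):

```python
import numpy as np

def pair_loss(zn, zm, ynm, kind, b):
    """Pairwise losses of eq. (1); zn, zm are b-bit codes in {-1, +1}^b."""
    if kind == "KSH":   # ynm in {-1, +1}
        return (zn @ zm - b * ynm) ** 2
    if kind == "BRE":   # ynm = ||xn - xm||^2 / 2, distances normalized to [0, 1]
        return (np.sum((zn - zm) ** 2) / b - ynm) ** 2
    if kind == "LAP":   # ynm > 0 for positive neighbors, < 0 for negative ones
        return ynm * np.sum((zn - zm) ** 2)
    raise ValueError(kind)
```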
In designing these objective functions, one needs to eliminate two types of trivial solutions. 1) In the Laplacian loss, mapping all points to the same code, i.e., z_1 = ··· = z_N, is the global optimum of the positive-neighbors term (this also arises if the codes z_n are real-valued, as in Laplacian eigenmaps). This can be avoided by having negative neighbors. 2) Having all hash functions (all b bits of each vector) identical to each other, i.e., z_{n1} = ··· = z_{nb} for each n = 1, ..., N. This can be avoided by introducing constraints, penalty terms or other mathematical devices that couple the b bits. For example, in the Laplacian loss (1) we can encourage codes to be orthogonal through a constraint ZᵀZ = NI [35] or a penalty term ‖ZᵀZ − NI‖² (with a hyperparameter that controls the weight of the penalty) [14], although this generates dense matrices of N × N. In the KSH or BRE losses (1), squaring the dot product or Hamming distance between the codes couples the b bits.
An important downside of these approaches is the difficulty of their optimization. This is due to the fact that the objective function is nonsmooth (implicitly discrete) because of the binary output of the hash function. There is a large number of such binary variables (bN), a larger number of pairwise interactions (O(N²), fewer if using sparse neighborhoods), and the variables are coupled by the said constraints or penalty terms. The optimization is approximated in different ways. Most papers ignore the binary nature of the Z codes and optimize over them as real values, then binarize them by truncation (possibly with an optimal rotation [16]), and finally fit a classifier (e.g., a linear SVM) to each of the b bits separately. For example, for the Laplacian loss with constraints this involves solving an eigenproblem on Z as in Laplacian eigenmaps [2, 35, 36], or an approximation using landmarks [27]. This is fast, but relaxing the codes in the optimization is generally far from optimal. Some recent papers try to respect the binary nature of the codes during their optimization, using techniques such as alternating optimization, min-cut and GraphCut [4, 14, 26] or others [25], and then fit the classifiers; or they use alternating optimization directly on the hash function parameters [28]. Even more recently, one can optimize jointly over the binary codes and hash functions [8, 14, 31]. Most of these approaches are slow and limited to small datasets (a few thousand points) because of the quadratic number of pairwise terms in the objective.
We propose a different, much simpler approach. Rather than coupling the b hash functions into a single objective function, we train each hash function independently of the others, using a single-bit objective function of the same form. We show that we can avoid trivial solutions by injecting diversity into each hash function's training, using techniques inspired by classifier ensemble learning. Section 1 discusses relevant ideas from the ensemble learning literature, section 2 describes our independent Laplacian hashing algorithm, section 3 gives evidence with image retrieval datasets that this simple approach indeed works very well, and section 4 further discusses the connection between hashing and ensembles.
1 Ideas from learning classifier ensembles
At first sight, optimizing the Laplacian loss without constraints does not seem like a good idea: since ‖z_n − z_m‖² separates over the b bits, we obtain b independent, identical objectives, one over each hash function, and so they all have the same global optimum. And, if all hash functions are equal, they are equivalent to using just one of them, which will give a much lower precision/recall. In fact, the very same issue arises when training an ensemble of classifiers [10, 22]. Here, we have a training set of input vectors and output class labels, and want to train several classifiers whose outputs are then combined (usually by majority vote). If the classifiers are all equal, we gain nothing over a single classifier. Hence, it is necessary to introduce diversity among the classifiers so that they disagree in their predictions. The ensemble learning literature has identified several mechanisms to inject diversity. The most important ones that apply to our binary hashing setting are as follows:
Using different data for each classifier This can be done by: 1) Using different feature subsets for each classifier. This works best if the features are somewhat redundant. 2) Using different training sets for each classifier. This works best for unstable algorithms (whose resulting classifier is sensitive to small changes in the training data), such as decision trees or neural nets, and unlike linear or nearest-neighbor classifiers. A prominent example is bagging [6], which generates bootstrap datasets and trains a model on each.
Injecting randomness in the training algorithm This is only possible if local optima exist (as for neural nets) or if the algorithm is randomized (as for decision trees). This can be done by using different initializations, adding noise to the updates, or using different choices in the randomized operations (e.g., the choice of split in decision trees, as in random forests [5]).
Using different classifier models For example, different parameters (e.g., the number of neighbors in a nearest-neighbor classifier), different architectures (e.g., neural nets with different numbers of layers or hidden units), or different types of classifiers altogether.
2 Independent Laplacian Hashing (ILH) with diversity
The connection of binary hashing with ensemble learning offers many possible options, in terms of
the choice of type of hash function (?base learner?), binary hashing (single-bit) objective function,
optimization algorithm, and diversity mechanism. In this paper we focus on the following choices.
We use linear and kernel SVMs as hash functions. Without loss of generality (see later), we use the
Laplacian objective (1), which for a single bit takes the form
E(z) = Σ_{n,m=1}^N ynm (zn − zm)²,   zn = h(xn) ∈ {−1, 1},   n = 1, . . . , N.   (2)
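For concreteness, a minimal Python sketch (our variable names, not the authors' code) that evaluates this single-bit objective for a candidate bit vector:

    import numpy as np

    def laplacian_objective(z, Y):
        # z: (N,) vector with entries in {-1, +1}; Y: (N, N) symmetric affinity
        # matrix (positive weights for similar pairs, negative for dissimilar).
        # Returns E(z) = sum_{n,m} y_nm (z_n - z_m)^2.
        diff = z[:, None] - z[None, :]
        return float(np.sum(Y * diff ** 2))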
To optimize it, we use a two-step approach, where we first optimize (2) over the N bits and then
learn the hash function by fitting to it a binary classifier. (It is also possible to optimize over the
hash function directly with the method of auxiliary coordinates; [8, 31], which essentially iterates
over optimizing (2) and fitting the classifier.) The Laplacian objective (2) is NP-complete if we have
negative neighbors (i.e., some ynm < 0). We approximately optimize it using a min-cut algorithm
(as implemented in [4]) applied in alternating fashion to submodular blocks as described in Lin et al.
[24]. This first partitions the N points into disjoint groups containing only nonnegative weights.
Each group defines a submodular function (specifically, quadratic with nonpositive coefficients)
whose global minimum can be found in polynomial time using min-cut. The order in which the
groups are optimized over is randomized at each iteration (this improves over using a fixed order).
The approximate optimizer found depends on the initial z ∈ {−1, 1}^N.
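As a simplified stand-in for the alternating min-cut optimizer just described (a sketch under the assumption that Y is symmetric with zero diagonal; this is not the solver of [4]), randomized greedy single-bit flips also decrease E(z) monotonically and likewise depend on the initial z:

    def greedy_flip(z, Y, rng, max_sweeps=20):
        # Coordinate descent on E(z).  z: NumPy array in {-1, +1}; Y:
        # symmetric affinity matrix with zero diagonal; rng: a NumPy
        # random Generator.  Flipping z_n changes E by 8 * z_n * (Y[n] . z),
        # so a flip strictly decreases the objective when that quantity < 0.
        for _ in range(max_sweeps):
            improved = False
            for n in rng.permutation(len(z)):        # randomized visit order
                if z[n] * Y[n].dot(z) < 0:
                    z[n] = -z[n]
                    improved = True
            if not improved:
                break
        return z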
Finally, we consider three types of diversity mechanism (as well as their combination):
Different initializations (ILHi) Each hash function is initialized from a random N -bit vector z.
Different training sets (ILHt) Each hash function uses a training set of N points that is different
and (if possible) disjoint from that of other hash functions. We can afford to do this because
in binary hashing the training sets are potentially very large, and the computational cost
of the optimization limits the training sets to a few thousand points. Later we show this
outperforms using bootstrapped training sets.
Different feature subsets (ILHf) Each hash function is trained on a random subset of 1 ≤ d ≤ D
features sampled without replacement (so the d features are distinct). The subsets corresponding to different hash functions may overlap.
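A minimal sketch of ILHt along these lines (it reuses greedy_flip from the sketch above; the affinity construction, subset sizes and helper names are our assumptions, not the authors' code):

    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_affinity(labels):
        # +1 for same-label pairs, -1 for different labels, zero diagonal.
        Y = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
        np.fill_diagonal(Y, 0.0)
        return Y

    def train_ilht(X, labels, b, n_per_bit, rng):
        # ILHt: the i-th hash function sees only its own disjoint subset of
        # n_per_bit points; its bits are optimized independently and a linear
        # SVM is then fit to them (hash = sign of the SVM decision function).
        order = rng.permutation(len(X))
        hashers = []
        for i in range(b):
            sub = order[i * n_per_bit:(i + 1) * n_per_bit]
            Y = pairwise_affinity(labels[sub])
            z = rng.choice([-1.0, 1.0], size=len(sub))   # random initial bits
            z = greedy_flip(z, Y, rng)                   # from the sketch above
            hashers.append(LinearSVC().fit(X[sub], z))
        return hashers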
These mechanisms are applicable to other objective functions beyond (2). We could also use the
same training set but construct differently the weight matrix in (2) (e.g. using different numbers of
positive and negative neighbors).
Equivalence of objective functions in the single-bit case Several binary hashing objectives that
differ in the general case of b > 1 bits become essentially identical in the b = 1 case. For example,
expanding the pairwise terms in (1) (noting that zn² = 1 if zn ∈ {−1, +1}) gives L(zn, zm; ynm) as
KSH: −2 ynm zn zm + constant    BRE: −4(2 − ynm) zn zm + constant    LAP: −2 ynm zn zm + constant.
So all the three objectives are in fact identical and can be written in the form of a binary quadratic
function without linear term (or a Markov random field with quadratic potentials only):
min_z E(z) = zᵀAz   with z ∈ {−1, +1}^N   (3)
with an appropriate, data-dependent symmetric neighborhood matrix A of N × N. This problem
is NP-complete in general [3, 13, 18], when A has both positive and negative elements, as well
as zeros. It is submodular if A has only nonpositive elements, in which case it is equivalent to a
min-cut/max-flow problem and it can be solved in polynomial time [3].
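This reduction is easy to check numerically: for the Laplacian case, (zn − zm)² = 2 − 2 zn zm gives E(z) = zᵀAz + constant with A = −2Y (a quick verification sketch, our notation):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8
    Y = rng.normal(size=(N, N)); Y = (Y + Y.T) / 2.0
    np.fill_diagonal(Y, 0.0)
    z = rng.choice([-1.0, 1.0], size=N)
    lap = np.sum(Y * (z[:, None] - z[None, :]) ** 2)   # Laplacian objective
    A = -2.0 * Y                                       # quadratic part
    const = 2.0 * Y.sum()                              # sum_{n,m} 2 y_nm
    assert np.isclose(lap, z @ A @ z + const)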
More generally, any function of a binary vector z that has the form E(z) = Σ_{n,m=1}^N fnm(zn, zm)
and which only depends on Hamming distances between bits zn , zm can be written as
fnm (zn , zm ) = anm zn zm + bnm . Even more, an arbitrary function of 3 binary variables that
depends only on their Hamming distances can be written as a quadratic function of the 3 variables.
However, for 4 variables or more this is not generally true (see supplementary material).
Computational advantages Training the hash functions independently has some important advantages. First, training the b functions can be parallelized perfectly. This is a speedup of one to
two orders of magnitude for typical values of b (32 to 200 in our experiments). Coupled objective
functions such as KSH do not exhibit obvious parallelism, because they are trained with alternating
optimization, which is inherently sequential.
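A sketch of that trivial parallelism, where train_bit is a hypothetical callable that trains the i-th hash function (a process pool is the natural choice for CPU-bound training; a thread pool keeps the sketch portable):

    from concurrent.futures import ThreadPoolExecutor

    def train_all_bits(train_bit, b, workers=8):
        # The b single-bit problems share nothing, so they can run
        # concurrently; returns the b trained hash functions in order.
        with ThreadPoolExecutor(max_workers=workers) as ex:
            return list(ex.map(train_bit, range(b)))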
Second, even in a single processor, b binary optimizations over N variables each is generally easier
than one binary optimization over bN variables. This is so because the search spaces contain b·2^N
and 2^(bN) states, respectively, so enumeration is much faster in the independent case (even though
it is still impractical). If using an approximate polynomial-time algorithm, the independent case is
also faster if the runtime is superlinear in the number of variables: the asymptotic runtimes will be
O(bN^α) and O((bN)^α) with α > 1, respectively. This is the case for the best practical GraphCut
[4] and max-flow/min-cut algorithms [9].
Third, the solution exhibits "nesting", that is, to get the solution for b + 1 bits we just need to take a
solution with b bits and add one more bit (as happens with PCA). This is unlike most methods based
on a coupled objective function (such as KSH), where the solution for b + 1 bits cannot be obtained
by adding one more bit, we have to solve for b + 1 bits from scratch.
For ILHf, both the training and test time are lower than if using all D features for each hash function.
The test runtime for a query is d/D times smaller.
Model selection for the number of bits b Selecting the number of bits (hash functions) to use has
not received much attention in the binary hashing literature. The most obvious way to do this would
be to maximize the precision on a test set over b (cross-validation) subject to b not exceeding a preset
limit (so applying the hash function is fast with test queries). The nesting property of ILH makes
this computationally easy: we simply keep adding bits until the test precision stabilizes or decreases,
or until we reach the maximum b. We can still benefit from parallel processing: if P processors are
available, we train P hash functions in parallel and evaluate their precision, also in parallel. If we
still need to increase b, we train P more hash functions, etc.
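A sketch of this incremental selection, where train_one_bit and evaluate are hypothetical user-supplied callbacks (train one more independent hash function; measure test precision of the current code):

    def select_bits(train_one_bit, evaluate, b_max, tol=1e-3):
        # Nesting: grow the code one bit at a time (no retraining of earlier
        # bits) and stop when test precision no longer improves.
        hashers, best = [], float("-inf")
        for i in range(b_max):
            hashers.append(train_one_bit(i))
            prec = evaluate(hashers)          # precision with i + 1 bits
            if prec < best + tol:
                hashers.pop()                 # last bit did not help; drop it
                break
            best = prec
        return hashers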
3 Experiments
We use the following labeled datasets (all using the Euclidean distance in feature space): (1) CIFAR
[19] contains 60 000 images in 10 classes. We use D = 320 GIST features [30] from each image.
We use 58 000 images for training and 2 000 for test. (2) Infinite MNIST [29]. We generated, using
elastic deformations of the original MNIST handwritten digit dataset, 1 000 000 images for training
and 2 000 for test, in 10 classes. We represent each image by a D = 784 vector of raw pixels. The
supplementary material contains experiments on additional datasets.
Because of the computational cost of affinity-based methods, previous work has used training sets
limited to a few thousand points [14, 21, 25, 28]. Unless otherwise indicated, we train the hash
functions in a subset of 5 000 points of the training set, and report precision and recall by searching
for a test query on the entire dataset (the base set). As hash functions (for each bit), we use linear
SVMs (trained with LIBLINEAR; [12]) and kernel SVMs (with 500 basis functions centered at a
random subset of training points). We report precision and recall for the test set queries using as
ground truth (set of true neighbors in original space) all the training points with the same label as the
query. The retrieved set contains the k nearest neighbors of the query point in the Hamming space.
We report precision for different values of k to test the robustness of different algorithms.
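For reference, a sketch of this precision measure (our names); for codes in {−1, +1}, the Hamming distance to a query code is (b − zᵀz′)/2:

    import numpy as np

    def precision_at_k(codes_db, labels_db, codes_q, labels_q, k):
        # codes_db: (N, b), codes_q: (Q, b), entries in {-1, +1}.
        b = codes_db.shape[1]
        precs = []
        for zq, yq in zip(codes_q, labels_q):
            ham = (b - codes_db @ zq) / 2.0    # Hamming distances to query
            nn = np.argsort(ham)[:k]           # retrieved set: k nearest
            precs.append(np.mean(labels_db[nn] == yq))
        return float(np.mean(precs))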
Diversity mechanisms with ILH To understand the effect of diversity, we evaluate the 3 mechanisms ILHi, ILHt and ILHf, and their combination ILHitf, over a range of number of bits b (32 to
128) and training set size N (2 000 to 20 000). As baseline coupled objective, we use KSH [28] but
using the same two-step training as ILH: first we find the codes using the alternating min-cut method
described earlier (initialized from an all-ones code, and running one iteration of alternating min-cut)
and then we fit the classifiers. This is faster and generally finds better optima than the original KSH
optimization [26]. We denote it as KSHcut.
Figure 1: Diversity mechanisms vs baseline (KSHcut). Precision on CIFAR dataset, as a function of the training set size N (2 000 to 20 000) and number of bits b (32 to 128). Ground truth: all points with the same label as the query. Retrieved set: k = 500 nearest neighbors of the query. Errorbars shown only for ILHt (over 5 random training sets) to avoid clutter. Top to bottom: linear and kernel hash functions. Left to right: diversity mechanisms (ILHi, ILHt, ILHf), their combination (ILHitf), and the baseline KSHcut.
Fig. 1 shows the results. The clearly best diversity mechanism is ILHt, which works better than
the other mechanisms, even when combined with them, and significantly better than KSHcut. We
explain this as follows. Although all 3 mechanisms introduce diversity, ILHt has a distinct advantage
(also over KSHcut): it effectively uses b times as much training data, because each hash function has
its own disjoint dataset. Using bN training points in KSHcut would be orders of magnitude slower.
ILHt is equal to or even better than the combined ILHitf because 1) since there is already enough
diversity in ILHt, the extra diversity from ILHi and ILHf does not help; 2) ILHf uses less data (it
discards features), which can hurt the precision; this is also seen in fig. 2 (panel 2). The precision of
all methods saturates as N increases; with b = 128 bits, ILHt achieves nearly maximum precision
with only 5 000 points. In fact, if we continued to increase the per-bit training set size N in ILHt,
eventually all bits would use the same training set (containing all available data), diversity would
disappear and the precision would drop drastically to the precision of using a single bit (≈ 12%).
Practical image retrieval datasets are so large that this is unlikely to occur unless N is very large
(which would make the optimization too slow anyway).
Linear SVMs are very stable classifiers known to benefit less from ensembles than less stable classifiers such as decision trees or neural nets [22]. Remarkably, they strongly benefit from the ensemble
in our case. This is because each hash function is solving a different classification problem (different
output labels), so the resulting SVMs are in fact quite different from each other. The conclusions
for kernel hash functions are similar. In fig. 1, the kernel functions are using the same, common 500
centers for the radial basis functions. Nonlinear classifiers are less stable than linear ones. In our
case they do not benefit much more than linear SVMs from the diversity. They do achieve higher
precision since they are more powerful models. See supplementary material for more results.
Fig. 2 shows the results on infinite MNIST dataset (see supp. mat for the results on CIFAR). Panel 1
shows the results in ILHf of varying the number of features 1 ≤ d ≤ D used by each hash function.
Intuitively, very low d is bad because each classifier receives too little information and will make
near-random codes. Indeed, for low d the precision is comparable to that of LSH (random projections) in panel 4. Very high d will also work badly because it would eliminate the diversity and drop
to the precision of a single bit for d = D. This does not happen because there is an additional source
of diversity: the randomization in the alternating min-cut iterations. This has an effect similar to that
of ILHi, and indeed a comparable precision. The highest precision is achieved with a proportion
d/D ? 30% for ILHf, indicating some redundancy in the features. When combined with the other
diversity mechanisms (ILHitf, panel 2), the highest precision occurs for d = D, because diversity is
already provided by the other mechanisms, and using more data is better.
Fig. 2 (panel 3) shows the results of constructing the b training sets for ILHt as a random sample
from the base set such that they are "bootstrapped" (sampled with replacement), "disjoint" (sampled
without replacement) or "random" (sampled without replacement but reset for each bit, so the training sets may overlap). As expected, "disjoint" (closely followed by "random") is consistently and
notably better than "bootstrap" because it introduces more independence between the hash functions
and learns from more data overall (since each hash function uses the same training set size).
Figure 2: Panels 1–2: effect of the proportion of features d/D used in ILHf and ILHitf. Panel 3: bootstrap vs random vs disjoint training sets in ILHt. Panel 4: precision as a function of the number of hash functions b for different methods (incremental ILHt, ILHt, KSHcut-ILHt, KSHcut, tPCA, bagged PCA, LSH). All results show precision using a training set of N = 5 000 points of the infinite MNIST dataset. Errorbars over 5 random training sets. Ground truth: all points with the same label as the query. Retrieved set: k = 10 000 nearest neighbors of the query.
Precision as a function of b Fig. 2 (panel 4) shows the precision (in the test set) as a function of
the number of bits b for ILHt, where the solution for b + 1 bits is obtained by adding a new bit to
the solution for b. Since the hash functions obtained depend on the order in which we add the bits,
we show 5 such orders (red curves). Remarkably, the precision increases nearly monotonically and
continues increasing beyond b = 200 bits (note the prediction error in bagging ensembles typically
levels off after around 25–50 decision trees; [22, p. 186]). This is (at least partly) because the
effective training set size is proportional to b. The variance in the precision decreases as b increases.
In contrast, for KSHcut the variance is larger and the precision barely increases after b = 80. The
higher variance for KSHcut is due to the fact that each b value involves training from scratch and we
can converge to a relatively different local optimum. As with ILHt, adding LSH random projections
(again 5 curves for different orders) increases precision monotonically, but can only reach a low
precision at best, since it lacks supervision. We also show the curve for thresholded PCA (tPCA),
whose precision tops out at around b = 30 and decreases thereafter. A likely explanation is that high-order principal components essentially capture noise rather than signal, i.e., random variation in
the data, and this produces random codes for those bits, which destroy neighborhood information.
Bagging tPCA (here, using ensembles where each member has 16 principal components, i.e., 16
bits) [23] does make tPCA improve monotonically with b, but the result is still far from competitive.
The reason is the low diversity among the ensemble members, because the top principal components
can be accurately estimated even from small samples.
Is the precision gap between KSH and ILHt due to an incomplete optimization of the KSH objective,
or to bad local optima? We verified that 1) random perturbations of the KSHcut optimum lower
the precision; 2) optimizing KSHcut using the ILHt codes as initialization (the "KSHcut-ILHt" curve)
increases the precision but it still remains far from that of ILHt. This confirms that the optimization
algorithm is doing its job, and that the ILHt diversity mechanism is superior to coupling the hash
functions in a joint objective.
Are the codes orthogonal? The result of learning binary hashing is b functions, represented by
a matrix W of b × D real weights for linear SVMs, and a matrix Z of N × b binary (−1, +1) codes for
the entire dataset. We define a measure of code orthogonality as follows. Define b × b matrices
CZ = (1/N) ZᵀZ for the codes and CW = WWᵀ for the weights (assuming normalized SVM
weights). Each C matrix has entries in [−1, 1], equal to a normalized dot product of codes or weight
vectors, and diagonal entries equal to 1. (Note that any matrix SCS where S is diagonal with ±1
entries is equivalent, since reverting a hash function's output does not alter the Hamming distances.)
Perfect orthogonality happens when C = I, and is encouraged by many binary hashing methods.
Fig. 3 shows this for ILHt in CIFAR (N = 58 000 training points of dim. D = 320). It plots CZ as
an image, as well as the histogram of the entries of CZ and CW . The histograms also contain, as a
control, the histogram corresponding to normalized dot products of random vectors (of dimension N
or D, respectively), which is known to tend to a delta function at 0 as the dimension grows. Although
CW has some tendency to orthogonality as the number of bits b increases, it is clear that, for both
codes and weight vectors, the distribution of dot products is wide, far from strict orthogonality.
Hence, enforcing orthogonality does not seem necessary to achieve good hash functions and codes.
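These measures are a few lines of NumPy (a sketch with our names):

    import numpy as np

    def orthogonality_matrices(Z, W):
        # Z: (N, b) codes in {-1, +1}; W: (b, D) linear-SVM weight vectors.
        CZ = Z.T @ Z / Z.shape[0]                          # normalized code dots
        Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # normalize weights
        CW = Wn @ Wn.T
        return CZ, CW                                      # both equal I if orthogonal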
Comparison with other binary hashing methods We compare with both the original KSH [28]
and its min-cut optimization KSHcut [26], and a representative subset of affinity-based and unsupervised hashing methods: Supervised Binary Reconstructive Embeddings (BRE) [21], Supervised
Self-Taught Hashing (STH) [36], Spectral Hashing (SH) [35], Iterative Quantization (ITQ) [16], Binary Autoencoder (BA) [8], thresholded PCA (tPCA), and Locality-Sensitive Hashing (LSH) [1].
Figure 3: Orthogonality of codes (b × b images and left histogram) and of hash function weight vectors (right histogram) in CIFAR.
We create affinities ynm for all the affinity-based methods using the dataset labels. For each training point xn , we use as similar neighbors 100 points with the same labels as xn ; and as dissimilar
neighbors 100 points chosen randomly among the points whose labels are different from that of
xn . For all datasets, all the methods are trained using a subset of 5 000 points. Given that KSHcut
already performs well [26] and that ILHt consistently outperforms it both in precision and runtime,
we expect ILHt to be competitive with the state-of-the-art. Fig. 4 shows this is generally the case,
particularly as the number of bits b increases, when ILHt beats all other methods, which are not able
to increase precision as much as ILHt does.
Runtime Training a single ILHt hash function (in a single processor) for CIFAR dataset with
N = 2 000, 5 000 and 20 000 takes 1.2, 2.8 and 22.5 seconds, respectively. This is much faster
than other affinity-based hashing methods (for example, for 128 bits with 5 000 points, BRE did not
converge after 12 hours). KSHcut is among the faster methods. Its runtime per min-cut pass over
a single bit is comparable to ours, but it needs b sequential passes to complete just one alternating
optimization iteration, while our b functions can be trained in parallel.
Summary ILHt achieves a remarkably high precision compared to a coupled KSH objective using
the same optimization algorithm but introducing diversity by feeding different data to independent
hash functions rather than by jointly optimizing over them. It also compares well with state-of-the-art methods in precision/recall, being competitive if few bits are used and the clear winner as more
bits are used, and is very fast and embarrassingly parallel.
4 Discussion
We have revealed for the first time a connection between supervised binary hashing and ensemble
learning that could open the door to many new hashing algorithms. Although we have focused on
a specific objective and identified as particularly successful with it a specific diversity mechanism
(disjoint training sets), other choices may be better depending on the application. The core idea we
propose is the independent training of the hash functions via the introduction of diversity by means
other than coupling terms in the objective or constraints. This may come as a surprise in the area
of learning binary hashing, where most work has focused on proposing complex objective functions
that couple all b hash functions and developing sophisticated optimization algorithms for them.
Another surprise is that orthogonality of the codes or hash functions seems unnecessary. ILHt creates
codes and hash functions that do differ from each other but are far from being orthogonal, yet they
achieve good precision that keeps growing as we add bits. Thus, introducing diversity through
different training data seems a better mechanism to make hash functions differ than coupling the
codes through an orthogonality constraint or otherwise. It is also far simpler and faster to train
independent single-bit hash functions.
A final surprise is that the wide variety of affinity-based objective functions in the b-bit case reduces
to a binary quadratic problem in the 1-bit case regardless of the form of the b-bit objective (as long
as it depends on Hamming distances only). In this sense, there is a unique objective in the 1-bit case.
There has been a prior attempt to use bagging (bootstrapped samples) with truncated PCA [23]. Our
experiments show that, while this improves truncated PCA, it performs poorly in supervised hashing.
This is because PCA is unsupervised and does not use the user-provided similarity information,
which may disagree with Euclidean distances in image space; and because estimating principal
components from samples has low diversity. Also, PCA is computationally simple and there is little
gain by bagging it, unlike the far more difficult optimization of supervised binary hashing.
Figure 4: Comparison with binary hashing methods in precision and precision/recall, using linear SVMs as hash functions and different numbers of bits b, for CIFAR and Inf. MNIST.
Some supervised binary hashing work [28, 34] has proposed to learn the b hash functions sequentially, where the ith function has an orthogonality-like constraint to force it to differ from the previous functions. Hence, this does not learn the functions independently and can be seen as a greedy
optimization of a joint objective over all b functions.
Binary hashing does differ from ensemble learning in one important point: the predictions of the b
classifiers (= b hash functions) are not combined into a single prediction, but are instead concatenated
into a binary vector (which can take 2^b possible values). The "labels" (the binary codes) for the
"classifiers" (the hash functions) are unknown, and are implicitly or explicitly learned together with
the hash functions themselves. This means that well-known error decompositions such as the error-ambiguity decomposition [20] and the bias-variance decomposition [15] do not apply. Also, the real
goal of binary hashing is to do well in information retrieval measures such as precision and recall,
but hash functions do not directly optimize this. A theoretical understanding of why diversity helps
in learning binary hashing is an important topic of future work.
In this respect, there is also a relation with error-correcting output codes (ECOC) [11], an approach
for multiclass classification. In ECOC, we represent each of the K classes with a b-bit binary vector,
ensuring that b is large enough for the vectors to be sufficiently separated in Hamming distance. Each
bit corresponds to partitioning the K classes into two groups. We then train b binary classifiers, such
as decision trees. Given a test pattern, we output as class label the one closest in Hamming distance
to the b-bit output of the b classifiers. The redundant error-correcting codes allow for small errors in
the individual classifiers and can improve performance. An ECOC can also be seen as an ensemble
of classifiers where we manipulate the output targets (rather than the input features or training set)
to obtain each classifier, and we apply majority vote on the final result (if the test output in classifier
i is 1, then all classes associated with 1 get a vote). The main benefit of ECOC seems to be in
variance reduction, as in other ensemble methods. Binary hashing can be seen as an ECOC with
N classes, one per training point, with the ECOC prediction for a test pattern (query) being the
nearest-neighbor class codes in Hamming distance. However, unlike in ECOC, in binary hashing
the codes are learned so they preserve neighborhood relations between training points. Also, while
ideally all N codes should be different (since a collision makes two originally different patterns
indistinguishable, which will degrade some searches), this is not guaranteed in binary hashing.
5 Conclusion
Much work in supervised binary hashing has focused on designing sophisticated objectives of the
hash functions that force them to compete with each other while trying to preserve neighborhood
information. We have shown, surprisingly, that training hash functions independently is not just simpler, faster and parallel, but also can achieve better retrieval quality, as long as diversity is introduced
into each hash function?s objective function. This establishes a connection with ensemble learning
and allows one to borrow techniques from it. We showed that having each hash function optimize a
Laplacian objective on a disjoint subset of the data works well, and facilitates selecting the number
of bits to use. Although our evidence is mostly empirical, the intuition behind it is sound and in
agreement with the many results (also mostly empirical) showing the power of ensemble classifiers.
The ensemble learning perspective suggests many ideas for future work, such as pruning a large
ensemble or using other diversity techniques. It may also be possible to characterize theoretically
the performance in precision of binary hashing depending on the diversity of the hash functions.
Acknowledgments
Work supported by NSF award IIS-1423515.
References
[1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Comm. ACM, 51(1):117–122, Jan. 2008.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, June 2003.
[3] E. Boros and P. L. Hammer. Pseudo-boolean optimization. Discrete Applied Math., Nov. 15 2002.
[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[5] L. Breiman. Random forests. Machine Learning, 45(1):5–32, Oct. 2001.
[6] L. J. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, Aug. 1996.
[7] M. Á. Carreira-Perpiñán. The elastic embedding algorithm for dimensionality reduction. ICML, 2010.
[8] M. Á. Carreira-Perpiñán and R. Raziperchikolaei. Hashing with binary autoencoders. CVPR, 2015.
[9] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2009.
[10] T. G. Dietterich. Ensemble methods in machine learning. Springer-Verlag, 2000.
[11] T. G. Dietterich and G. Bakiri. Solving multi-class learning problems via error-correcting output codes. J. Artificial Intelligence Research, 2:253–286, 1995.
[12] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. J. Machine Learning Research, 9:1871–1874, Aug. 2008.
[13] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, 1979.
[14] T. Ge, K. He, and J. Sun. Graph cuts for supervised binary coding. ECCV, 2014.
[15] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, Jan. 1992.
[16] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. PAMI, 2013.
[17] K. Grauman and R. Fergus. Learning binary hash codes for large-scale image search. In Machine Learning for Computer Vision, pages 49–87. Springer-Verlag, 2013.
[18] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? PAMI, 2003.
[19] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Apr. 8 2009.
[20] A. Krogh and J. Vedelsby. Neural network ensembles, cross validation, and active learning. NIPS, 1995.
[21] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. NIPS, 2009.
[22] L. I. Kuncheva. Combining Pattern Classifiers: Methods and Algorithms. John Wiley & Sons, 2014.
[23] C. Leng, J. Cheng, T. Yuan, X. Bai, and H. Lu. Learning binary codes with bagging PCA. ECML, 2014.
[24] B. Lin, J. Yang, X. He, and J. Ye. Geodesic distance function learning via heat flows on vector fields. ICML, 2014.
[25] G. Lin, C. Shen, D. Suter, and A. van den Hengel. A general two-step approach to learning-based hashing. ICCV, 2013.
[26] G. Lin, C. Shen, Q. Shi, A. van den Hengel, and D. Suter. Fast supervised hashing with decision trees for high-dimensional data. CVPR, 2014.
[27] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. ICML, 2011.
[28] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. CVPR, 2012.
[29] G. Loosli, S. Canu, and L. Bottou. Training invariant support vector machines using selective sampling. In Large Scale Kernel Machines, Neural Information Processing Series, pages 301–320. MIT Press, 2007.
[30] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Computer Vision, 42(3):145–175, May 2001.
[31] R. Raziperchikolaei and M. Á. Carreira-Perpiñán. Optimizing affinity-based binary hashing using auxiliary coordinates. NIPS, 2016.
[32] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, Dec. 22 2000.
[33] G. Shakhnarovich, P. Indyk, and T. Darrell, editors. Nearest-Neighbor Methods in Learning and Vision. Neural Information Processing Series. MIT Press, Cambridge, MA, 2006.
[34] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for large scale search. PAMI, 2012.
[35] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. NIPS, 2009.
[36] D. Zhang, J. Wang, D. Cai, and J. Lu. Self-taught hashing for fast similarity search. SIGIR, 2010.
Hidden Markov Models in Molecular Biology: New Algorithms and Applications
Yves Chauvin †
Net-ID, Inc.
8, Cathy Place
Menlo Park, CA 94305
Pierre Baldi *
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
Tim Hunkapiller
Division of Biology
California Institute of Technology
Marcella A. McClure
Department of Evolutionary Biology
University of California, Irvine
Abstract
Hidden Markov Models (HMMs) can be applied to several important problems in molecular biology. We introduce a new convergent
learning algorithm for HMMs that, unlike the classical Baum-Welch
algorithm is smooth and can be applied on-line or in batch mode,
with or without the usual Viterbi most likely path approximation.
Left-right HMMs with insertion and deletion states are then trained
to represent several protein families including immunoglobulins and
kinases. In all cases, the models derived capture all the important
statistical properties of the families and can be used efficiently in
a number of important tasks such as multiple alignment, motif detection, and classification.
* and Division of Biology, California Institute of Technology.
† and Department of Psychology, Stanford University.
1 INTRODUCTION
Hidden Markov Models (e.g., Rabiner, 1989) and the more general EM algorithm in
statistics can be applied to the modeling and analysis of biological primary sequence
information (Churchill (1989), Lawrence and Reilly (1990), Baldi et al. (1992),
Cardon and Stormo (1992), Haussler et al. (1992)). Most notably, as in speech
recognition applications, a family of evolutionarily related sequences can be viewed
as consisting of different utterances of the same prototypical sequence resulting from
a common underlying HMM dynamics. A model trained from a family can then be
used for a number of tasks including multiple alignments and classification. The
multiple alignment is particularly important since it reveals the highly conserved
regions of the molecules with functional and structural significance even in the
absence of any tertiary information. The multiple alignment is also an essential
tool for proper phylogenetic tree reconstruction and other important tasks. Good
algorithms based on dynamic programming exist for the alignment of two sequences.
However they scale exponentially with the number of sequences and the general
multiple alignment problem is known to be NP-complete. Here, we briefly present
a new algorithm and its variations for learning in HMMs and the results of some of
the applications of this approach to new protein families.
2 HMMs FOR BIOLOGICAL PRIMARY SEQUENCES
A HMM is characterized by a set of states, an alphabet of symbols, a probability
transition matrix T = (tij) and a probability emission matrix E = (eij). As in speech
applications, we are going to consider left-right architectures: once a given state is
left it can never be visited again. Common knowledge of evolutionary mechanisms
suggests the choice of three types of states (in addition to the start and to the
end state): the main states m1, . . . , mN, the delete states d1, . . . , dN+1 and the insert
states i1, . . . , iN+1. N is the length of the model which is usually chosen equal to the
average length of the sequences in the family and, if needed, can be adjusted in later
stages. The details of a typical architecture are given in Figure 1. The alphabet
has 4 letters in the case of DNA or RNA sequences, one symbol per nucleotide,
and 20 letters in the case of proteins, one symbol per amino acid. Only the main
and insert states emit letters, while the delete states are of course mute. The
linear sequence of state transitions start → m1 → m2 → · · · → mN → end is the
backbone of the model and corresponds to the path associated with the prototypical
sequence in the family under consideration. Insertions and deletions are defined
with respect to this backbone. Insertions and deletions are treated symmetrically
except for the loops on the insert states needed to account for multiple insertions.
The adjustable parameters of the HMM provide a natural way of incorporating
variable gap penalties. A number of other architectures are also possible.
3 LEARNING ALGORITHMS
Learning from examples in HMMs is typically accomplished using the Baum-Welch
algorithm. In the Baum-Welch algorithm, the expected number nij (resp. mij)
of i → j transitions (resp. emissions of letter j from state i) induced by the data
are calculated using the forward-backward procedure. The transition and emission
Figure 1: The basic left-right HMM architecture. S and E are the start and end
states.
probabilities are then reset to the observed frequencies by
t+ij = nij / ni   and   e+ij = mij / mi   (1)
where ni = Σj nij and mi = Σj mij. It is clear that this algorithm can lead to
abrupt jumps in parameter space and that the procedure cannot be used for online learning (after each training example). This is even more so if, in order to
save some computations, the Viterbi approximation is used to estimate likelihoods
and transition and emission statistics by computing only the most likely paths as
opposed to the forward-backward procedure where all possible paths are examined.
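For concreteness, the re-estimation step (1) given the expected counts is a row normalization (a sketch, our names):

    import numpy as np

    def baum_welch_reestimate(n, m):
        # n[i, j]: expected number of i -> j transitions; m[i, j]: expected
        # emissions of letter j from state i (from the forward-backward pass).
        T = n / n.sum(axis=1, keepdims=True)
        E = m / m.sum(axis=1, keepdims=True)
        return T, E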
A new algorithm for HMM learning which is smooth and can be used on-line or in
batch mode, with or without the Viterbi approximation, can be defined as follows.
First, we use a Boltzmann-Gibbs representation for the parameters. For each tij
(resp. eij) we define a new parameter wij (resp. vij) by
tij = exp(wij) / Σk exp(wik)   and   eij = exp(vij) / Σk exp(vik)   (2)
Normalisation constraints are naturally enforced by this representation throughout
learning with the added advantage that none of the parameters can reach the absorbing value 0. After computing on-line or in batch mode the statistics nij and
mij using the forward-backward procedure (or the usual Viterbi approximation),
the update equations are particularly simple and given by
Δwij = η (nij / ni − tij)   and   Δvij = η (mij / mi − eij)   (3)
where η is the learning rate. In Baldi et al. (1992) a proof is given that this
algorithm must converge to a maximum of the product of the likelihoods of the
training sequences. In the case of an on-line Viterbi approximation, the optimal
path associated with the current training sequence is first computed. The update
equations are then given by
Δwij = η (t*ij − tij)   and   Δvij = η (e*ij − eij)   (4)
Here, for a fixed state i, t*ij and e*ij are the target transition and emission values: t*ij = 1 every time the transition si → sj is part of the Viterbi path of the
corresponding training sequence and 0 otherwise, and similarly for e*ij.
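A sketch of one batch update under this parameterization, combining (2) and (3) (our names; with the on-line Viterbi approximation, the frequencies nij/ni and mij/mi are replaced by the 0/1 targets of (4)):

    import numpy as np

    def smooth_update(w, v, n, m, eta):
        # w, v: parameter matrices; softmax rows give tij and eij as in (2).
        t = np.exp(w); t /= t.sum(axis=1, keepdims=True)
        e = np.exp(v); e /= e.sum(axis=1, keepdims=True)
        w = w + eta * (n / n.sum(axis=1, keepdims=True) - t)   # eq. (3)
        v = v + eta * (m / m.sum(axis=1, keepdims=True) - e)
        return w, v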
After training, the model derived can be used for a number of tasks. First, by
computing for each sequence its most likely path through the model using the
Viterbi algorithm, multiple sequences can be aligned to each other in time O(KN²),
linear in the number K of sequences. The model can also be used for classification
and data base searches. The likelihood of any sequence (randomly generated or
taken from any data base) can be calculated and compared to the likelihood of the
sequences in the family being modeled. Additional applications are discussed in
Baldi et al. (1992).
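As an example of such a likelihood computation, a log-space forward pass for a plain HMM without silent states (a simplified stand-in; the mute delete states of the architecture above require a slightly more careful recursion):

    import numpy as np
    from scipy.special import logsumexp

    def sequence_log_likelihood(obs, T, E, start):
        # obs: list of symbol indices; T: (S, S) transition matrix;
        # E: (S, A) emission matrix; start: (S,) initial distribution.
        alpha = np.log(start) + np.log(E[:, obs[0]])
        for o in obs[1:]:
            alpha = logsumexp(alpha[:, None] + np.log(T), axis=0) + np.log(E[:, o])
        return float(logsumexp(alpha))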
4 EXPERIMENTS AND RESULTS
The previous approach has been applied to a number of protein families including
globins, immunoglobulins, kinases, aspartic acid proteases and G-coupled receptor
proteins. The first application and alignment of the globin family using HMMs
(trained with the Viterbi approximation of the Baum-Welch algorithm, and a number of additional heuristics) was given by Haussler et al. (1992). Here, we briefly
describe some of our results on the immunoglobulin and the kinase families 1.
4.1 IMMUNOGLOBULINS
Immunoglobulins or antibodies are proteins produced by B cells that bind with
specificity to foreign antigens in order to neutralize them or target their destruction
by other effector cells (e.g., Hunkapiller & Hood, 1989). The set of sequences used in
our experiments consists of immunoglobulins V region sequences from the Protein
Identification Resources (PIR) data base. It corresponds to 294 sequences, with
minimum length 90, average length 117 and maximum length 254. The variation in
length resulted from including any sequence with a V region, including those that
also included signal or leader sequences, germline sequences that did not include the
J segment, and some that contained the C region as well. Seventy sequences contained
one or more special characters indicating an ambiguous amino acid determination
and were removed.
For the immunoglobulin variable (V) regions, we have trained a model of length 117
using a random subset of 150 sequences. Figure 2 displays the alignment corresponding to the first 20 sequences in this random subset. Letters emitted from the
main states are upper case and letters emitted from insertion states are lower case.
Dashes represent deletions or accommodate insertions. As can be observed, the
algorithm has been able to detect all the main regions of highly conserved residues.
Most importantly, the cysteine residues towards the beginning and the end, responsible for the disulphide bonds that hold the chains together, are perfectly aligned
and marked. The only exception is the fifth sequence from the bottom which has
a serine residue in its terminal portion. It is also important to remark that some
1 Recently, Haussler et al. have also independently applied their approach to the kinase
family (Haussler, private communication).
PHOI06
B27563
MHMS76
D28035
D24672
PHOIOO
B27888
PL0160
E28833
D30539
C30560
AVMSX4
C30540
PL0123
H36005
PH0097
137267
A25114
D2HUWA
A30539
mklpvrllvlmfwipasssDvVMTQTPLSLpvSLGDQASISCRSSQSLVHSngnTYLNWYLQ--KAGQS----------------------LQQPGAELv-KPGASVKLSCKASGYTFTN---YWIHWVKQ--RPGRGL
------------------------ESGGGLv-QPGGSMKLSCVASGFTFSN---YWMNWVRQ--SPEKGL
mefglswiflvailkgvqcEvRLVESGGDLv-EPGGSLRVSCEVSGFIFSK---AWMNWVRQ--APGKGL
---------------------------------------ISCKASGYTFTN---YGMNWVKQ--APGKGL
-------------------LvQLQQSGPVLv-KPGTSMKISCKTSGYSFTG---YTMSWVRQ--SHGKSL
-------------------EvMLVESGGGLa-KPGGSLKLSCTTSGFTFS1---HAMSWVRQ--TPEKRL
-------------------QvQLQQSGPGLv-KPSQTLSLTCA1SGDSVSSns-AAWNW1RQ--SPSRGL
-------------------DvVMTQTPLSLpvSLGDQASISCRSSQSLVRSngnTYLHWYLQ--KPGQP-------------------EvKLVESGGGLv-QSGGSLRLSCATSGFTFSD---FYMEWVRQ--PPGKSL
-------------------QvHLQQSGAELv-KPGASVK1SCKASGYTFTS---YWMNWVKQ--RPGQGL
-------------------EvKLLESGGGLv-QPGGSLKLSCAASGFDFSR---YWMSWVRQ--APGKGL
-------------------EvKLVESGGGLv-QPGGSLRLSCATSGFTFSD---FYMEWVRQ--PPGKRL
-------------------EvQLVESGGGLv-QPGGSLRLSCAASGFTFSS---YWMSWVRQ--APGKGL
-------------------EvQLVESGGGLv-KPGGSLRLSCAASGFTFSN---AWMNWVRQ--APGKGL
-------------------DvKLVESGGGLv-KPGGSLKLSCAASGFTFSS---Y1MSWVRQ--TPEKRL
gsimg---------------vQLQQSGPELv-KPGASVKISCKTSGYTFTE---YTMHWVKQ--SHGKSL
-------------------DvHLQESGPGLv-KPSQSLSLTCSVTGYS1TRg--YNWNW1RR--FPGNKL
-------------------RIQLQESGPGLv-KPSETLSLTCIVSGGPIRRtg-YYWGWIRQ--PPGKGL
-------------------EvKLVESGGGLv-QPGGSLRLSCATSGFTFSD---FYMEWVRQ--PPGKRL
PHOI06
B27563
MHMS76
D28035
D24672
PHOIOO
B27888
PL0160
E28833
D30539
C30560
AVMSX4
C30540
PL0123
H36005
PH0097
137267
A25114
D2HUWA
A30539
-p-KLLI-YKV---SNR-FSGVPDRFSGSG--SGTDFTLKI SRVEAEDLG IYFCSQ-------------E-W1GR1-DPNSGGTKY-NEKFKNKATLT1NKPSNTAYMQLSSLTSDDSAVYYCARGYDYSYY------E-WVAE1rLKSGYATHY-AESVKGRFT1SRDDSKSSVYLQMNNLRAEDTG1YYCTRPGV----------Q-WVGQ I kNKVDGGTIDYAAPVKGRF I ISRDDSKSTVYLQMNRLK1EDTAVYYCVGNYTGT--------K-WMGW1-NTYTGEPTY-ADDFKGRFAFSLETSASTAYLQ1NNLKNEDTATYFCARGSSYDYY------E-W1GL1-1PSNGGTNY-NQKFKDKASLTVDKSSSTAYMELLSLTSEDSAVYYCARPSYYGSRnyy---E-WVAA1-SSGGSYTFY-PDSVKGRFT1SRDNAKNTLYLQ1NSLRSEDTAIYYCAREEGLRLDdy----E-WLGRT-YYRSKWYNDYAVSVKSRITINPDTSKNQFSLQLNSVTPEDTAVYYCARELGDA---------p-KLLI-YKV---SNR-VSGVPDRFSGSG--SGTDFTLK1SRVEAEDLGVYFCSQSTHV---------E-WlAASrNEANDYTTEYSASVKGRFIVSRDTSQSILYLQM1ALRAEDTAIYYCSRDYYGSSYw-----E-W1GEI-DPSNSYTNN-NQKFKNKATLTVDKSSNTAYMQLSSLTSEDSAVYYCARWGTGSSWg-----E-W1GE1-NPDSST1NY-TPSLKDKFIISRDNAKNTLYLQMSKVRSEDTALYYCARLHYYGY-------E-WlAASrNKAHDYTTEYSASVKGRFIVSRDTSQSI LYLQMNALRAEDTA1YYCARDADYGSSshw---E-WVAN1-KQDGSEKYY-VDSVKGRFT1SRDNAKNSLYLQMNSLRAEDTAVYYCAR-------------E-WVGRlkSKTDGGTTDYAAPVKGRFT1SRDDSKNTLYLQMNSLKTEDTAVYYCTTDRGGSSQ------E-WVAT1-SSGGRYTYY-SDSVKGRFT1SRDNAKNTLYLQMSSLRSEDTAMYYSTASGDS---------E-W1GGI-NPNNGGTSY-NQKFKGKATLTVDKSSSTAYMELRSLTSEDSAVYYCARRGLTTVVaksy--E-WMGY1-NYDGS-NNY-NPSLKNR1SVTRDTSKNQFFLKMNSVTTEDTATYYCARL1PFSDGyyedyyE-W1GGV-YYTGS-1YY-NPSLRGRVT1SVDTSRNQFSLNLRSMSAADTAMYYCARGNPPPYYdigtgsd
E-W1AAS rNKANDYTTEYSASVKGRF 1VSRDTSQS I LYLQMNALRAEDTA1YYCARDYYGSSYvw -----
(conserved positions, including the cysteine column, are marked with * in the original alignment)
PHOI06
B27563
MHMS76
D28035
D24672
PHOIOO
B27888
PL0160
E28833
D30539
C30560
AVMSX4
C30540
PL0123
H36005
PH0097
137267
A25114
D2HUWA
A30539
----------------tthvpptfgggtkleikr-AMDYWGQGTSVTVSS--------------------PDYWGQGTTLTVSS--------------------VDYWGQGTLVTVSS-------------------AMDYWGQGTSVTVSS-------------------AMDYWGQGTSVTVSSak-----------------AMDYWGQGTSVTVS---------------------FD1WGQGTMVTVSS-------------------YFDVWGAGTTVTVSS-------------------WFAYWGQGTLVTVSA--------------------AAYWGQGTLVTVSAe------------------yFDVWGAGTTVTVSS--------------------GDYWGQGTLVTVSS--------------------FDYWGQGTTLTVSSak-----------------yFDYWGQGTTLTVSS-------------------AMDYWGQGT------------------------dG1DVWGQGTTVHVSS-------------------YFDVWGAGTTVTVSS-------------------
Figure 2: Immunoglobulin alignment.
of the sequences in the family have some sort of "header" (leader signal peptide)
whereas the others do not. We did not remove the headers prior to training and
used the sequences as they were given to us. The model was able to detect and
accommodate these "headers" by treating them as initial inserts, as can be seen from
the alignment of two of the sequences.
4.2 KINASES
Eukaryotic protein kinases constitute a very large family of proteins that regulate the most basic of cellular processes through phosphorylation. They have been
termed the "transistors" of the cell (Hunter (1987)). We have used the sequences
available in the kinase data base maintained at the Salk Institute. Our basic set
consists of 224 sequences, with minimum length 156, average length 287, and maximal length 569. Only one sequence containing a special symbol (X) was discarded.
In one experiment, we trained a model of length 287 using a random subset of
150 kinase sequences. Figure 3 displays the corresponding alignment for a subset
of 12 phylogenetically representative sequences. These include serine/threonine,
tyrosine and dual specificity kinases from mammals, birds, fungi and retroviruses
and herpes viruses. The percentage of identical residues within the kinase data
sets ranges from 8-30%, suggesting that only those residues involved in catalysis
are conserved among these highly divergent sequences. All the 12 characteristic
catalytic domains or subdomains described in Hanks and Quinn (1991) are easily
recognizable and marked. Additional highly conserved positions can also be observed, consistent with previously constructed multiple alignments. For instance,
the initial hydrophobic consensus Gly-X-Gly-XX-Gly together with the Lys located
15 or 20 residues downstream are part of the ATP/GTP binding site. The carboxyl
terminus is characterized by the presence of an invariant Arg residue. Conserved
residues in proximity to the acceptor amino acid are found in the VIb (Asp), VII
(Asp-Phe-Gly) and VIII domains (Ala-Pro-Glu). In Figure 4, the entropy of the
emission distribution of each main state is plotted: motifs are easily detectable and
correspond to positions with very low entropy.
5 DISCUSSION
HMMs are emerging as a powerful, adaptive, and modular tool for computational
biology. Here, they have been used, together with a new learning algorithm, to
model families of proteins. In all cases, the models derived capture all the important
statistical properties of the families. Additional results and potential applications,
such as phylogenetic tree reconstruction, classification, and superfamily modeling,
are discussed in Baldi et al. (1992).
References
Baldi, P., Chauvin, Y., Hunkapiller, T. and McClure, M. A. (1992) Adaptive Algorithms for Modeling and Analysis of Biological Primary Sequence Information.
Technical Report.
Cardon, L. R. and Stormo, G. D. (1992) Expectation Maximization Algorithm for Identifying Protein-binding Sites with Variable Lengths from Unaligned DNA Fragments. Journal of Molecular Biology, 223, 159-170.
[Figure 3 body: multiple alignment of 12 representative kinases (CD2a, MLCK, PSKH, CAPK, WEE1, CSRC, EGFR, PDGF, VFES, RAF1, CMOS, HSVK), with catalytic subdomains I-XI marked; sequence rows omitted.]
Figure 3: Kinase alignment of 12 representative sequences.
[Figure 4 body: per-position plot of main-state emission entropy ("Main State Entropy Values") and histogram of entropy values ("Entropy Distribution"); axis ticks omitted.]
Figure 4: Kinase emission entropy plot and distribution.
Churchill, G. A. (1989) Stochastic Models for Heterogeneous DNA Sequences. Bulletin of Mathematical Biology, 51, 1, 79-94.
Hanks, S. K., Quinn, A. M. (1991) Protein Kinase Catalytic Domain Sequences
Database: Identification of Conserved Features of Primary Structure and Classification of Family Members. Methods in Enzymology, 200, 38-62.
Haussler, D., Krogh, A., Mian, S. and Sjolander, K. (1992) Protein Modeling using
Hidden Markov Models. Computer and Information Sciences Technical Report
(UCSC-CRL-92-93), University of California, Santa Cruz.
Hunkapiller, T. and Hood, L. (1989) Diversity of the Immunoglobulin Gene Superfamily. Advances in Immunology, 44, 1-63, Academic Press, Inc.
Hunter, T. (1987) A Thousand and One Protein Kinases. Cell, 50, 823-829.
Lawrence, C. E. and Reilly, A. A. (1990) An Expectation Maximization (EM) Algorithm for the Identification and Characterization of Common Sites in Unaligned
Biopolymer Sequences. Proteins: Struct. Funct. Genet., 7, 41-51.
Rabiner, L. R. (1989) A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77, 2, 257-286.
| 629 |@word private:1 briefly:2 mammal:1 phosphorylation:1 initial:2 fragment:1 egfr:5 terminus:1 ala:1 current:1 virus:1 si:1 must:1 cruz:1 remove:1 treating:1 plot:1 update:2 selected:1 beginning:1 tertiary:1 characterization:1 phylogenetic:2 mathematical:1 dn:1 constructed:1 ucsc:1 consists:2 baldi:10 recognizable:1 introduce:1 expected:1 terminal:1 xx:1 churchill:2 ykv:2 backbone:2 emerging:1 every:1 lys:1 ly:3 bind:1 receptor:1 id:1 path:7 yd:1 bird:1 examined:1 suggests:1 antigen:1 co:1 hmms:8 range:1 responsible:1 hood:2 procedure:4 acceptor:1 reilly:2 specificity:2 protein:15 cannot:1 baum:4 independently:1 welch:4 abrupt:1 identifying:1 m2:1 haussler:3 importantly:1 biopolymer:1 variation:2 resp:4 target:2 programming:1 cysteine:1 recognition:2 particularly:2 located:1 database:1 observed:3 bottom:1 capture:2 thousand:1 region:6 removed:1 insertion:6 dynamic:2 tyrosine:1 trained:5 segment:1 funct:1 division:2 easily:2 alphabet:2 describe:1 phe:1 cathy:1 herpes:1 header:3 heuristic:1 stanford:1 modular:1 otherwise:1 statistic:3 online:1 sequence:42 advantage:1 transistor:1 net:1 reconstruction:2 gly:4 unaligned:2 product:1 reset:1 gq:1 aligned:2 loop:1 cmos:5 tim:1 ij:1 krogh:1 arl:1 stochastic:1 sand:1 biological:3 adjusted:1 insert:4 hold:1 proximity:1 lawrence:2 viterbi:8 stormo:1 phylogenetically:1 bond:1 visited:1 neutralize:1 peptide:1 tool:2 rna:1 asp:2 derived:3 emission:7 protease:1 catalytic:2 likelihood:4 detect:2 motif:2 destruction:1 foreign:1 typically:1 hidden:8 pasadena:1 going:1 wij:3 arg:1 classification:5 dual:1 among:1 sjolander:1 special:2 equal:1 once:1 never:1 biology:11 identical:1 park:1 seventy:1 weel:4 np:1 others:1 report:2 randomly:1 wee:1 resulted:1 consisting:1 detection:1 normalisation:1 highly:4 alignment:13 chain:1 emit:1 calc:1 marcella:1 nucleotide:1 tree:2 iv:2 plotted:1 immunoglobulin:9 nij:4 delete:2 effector:1 instance:1 modeling:4 eep:1 maximization:2 subset:4 snr:2 gr:1 gd:1 immunology:1 together:3 iy:2 serine:2 yg:2 na:1 again:1 opposed:1 containing:1 account:1 suggesting:1 tii:2 potential:1 diversity:1 inc:2 vi:1 later:1 portion:1 start:3 sort:1 ably:1 yves:1 ni:3 acid:4 characteristic:1 efficiently:1 rabiner:2 correspond:1 identification:3 produced:1 hunter:2 none:1 reach:1 frequency:1 involved:1 naturally:1 associated:2 mi:2 proof:1 gdl:1 irvine:1 knowledge:1 dyh:1 hank:2 stage:1 mian:1 mode:3 laboratory:1 cardon:2 ambiguous:1 maintained:1 correponds:1 eii:2 complete:1 pro:1 consideration:1 recently:1 common:3 absorbing:1 functional:1 qp:1 exponentially:1 nh:1 discussed:2 gibbs:1 atp:1 similarly:1 gt:1 base:4 termed:1 hydrophobic:1 accomplished:1 lnp:1 conserved:6 seen:1 minimum:2 additional:4 eo:3 converge:1 signal:2 ii:1 multiple:8 smooth:2 technical:2 jet:1 academic:1 characterized:2 determination:1 mcclure:6 molecular:6 basic:3 heterogeneous:1 expectation:2 represent:2 globin:2 cell:4 addition:1 residue:8 whereas:1 unlike:1 sr:1 induced:1 member:1 emitted:2 structural:1 symmetrically:1 presence:1 fungi:1 psychology:1 architecture:4 perfectly:1 genet:1 pir:1 penalty:1 speech:3 remark:1 etk:1 tij:4 clear:1 santa:1 dna:3 glu:1 exist:1 percentage:1 vy:1 tutorial:1 per:2 yy:1 t1j:1 tst:1 backward:2 downstream:1 enforced:1 letter:6 powerful:1 place:1 family:17 throughout:1 mii:1 antibody:1 dash:1 convergent:1 display:2 g:1 constraint:1 department:2 em:2 character:1 invariant:1 taken:1 equation:2 resource:1 previously:1 vq:1 detectable:1 mechanism:1 needed:2 ge:1 end:4 available:1 regulate:1 quinn:2 pierre:1 save:1 batch:3 struct:1 
subdomains:1 carboxyl:1 include:2 classical:1 added:1 primary:4 usual:2 evolutionary:2 hmm:5 propulsion:1 cellular:1 consensus:1 chauvin:6 viii:2 length:11 modeled:1 vib:2 kdn:2 proper:1 boltzmann:1 kinase:13 adjustable:1 upper:1 markov:7 discarded:1 disulphide:1 communication:1 kl:1 california:5 deletion:4 able:2 usually:1 gar:1 including:5 gtp:1 treated:1 natural:1 hr:1 mn:2 technology:3 coupled:1 utterance:1 prior:1 prototypical:2 consistent:1 vij:1 course:1 institute:4 bulletin:1 fifth:1 superfamily:2 calculated:1 transition:7 forward:3 jump:1 adaptive:2 sj:1 gene:1 ml:1 reveals:1 leader:2 xi:1 search:1 sk:1 molecule:1 ca:2 menlo:1 hunkapiller:6 eukaryotic:1 domain:3 did:2 significance:1 main:6 evolutionarily:1 amino:3 site:3 representative:2 salk:1 n:1 position:2 pv:1 ix:1 rk:1 symbol:4 divergent:1 essential:1 incorporating:1 catalysis:1 kr:1 accomodate:2 gap:1 vii:4 entropy:5 tc:1 eij:3 likely:3 kvr:1 contained:2 binding:2 mij:3 corresponds:1 viewed:1 marked:2 towards:1 absence:1 crl:1 included:1 typical:1 except:1 pys:1 la:1 indicating:1 exception:1 |
5,847 | 6,290 | Measuring the reliability of MCMC inference with
bidirectional Monte Carlo
Roger B. Grosse
Department of Computer Science
University of Toronto
Siddharth Ancha
Department of Computer Science
University of Toronto
Daniel M. Roy
Department of Statistics
University of Toronto
Abstract
Markov chain Monte Carlo (MCMC) is one of the main workhorses of probabilistic
inference, but it is notoriously hard to measure the quality of approximate posterior
samples. This challenge is particularly salient in black box inference methods,
which can hide details and obscure inference failures. In this work, we extend
the recently introduced bidirectional Monte Carlo [GGA15] technique to evaluate
MCMC-based posterior inference algorithms. By running annealed importance
sampling (AIS) chains both from prior to posterior and vice versa on simulated data,
we upper bound in expectation the symmetrized KL divergence between the true
posterior distribution and the distribution of approximate samples. We integrate
our method into two probabilistic programming languages, WebPPL [GS] and Stan
[CGHL+ p], and validate it on several models and datasets. As an example of how
our method can be used to guide the design of inference algorithms, we apply it to
study the effectiveness of different model representations in WebPPL and Stan.
1 Introduction
Markov chain Monte Carlo (MCMC) is one of the most important classes of probabilistic inference
methods and underlies a variety of approaches to automatic inference [e.g. LTBS00; GMRB+08;
GS; CGHL+ p]. Despite its widespread use, it is still difficult to rigorously validate the effectiveness
of an MCMC inference algorithm. There are various heuristics for diagnosing convergence, but
reliable quantitative measures are hard to find. This creates difficulties both for end users of automatic
inference systems and for experienced researchers who develop models and algorithms.
In this paper, we extend the recently proposed bidirectional Monte Carlo (BDMC) [GGA15] method
to evaluate certain kinds of MCMC-based inference algorithms by bounding the symmetrized KL
divergence (Jeffreys divergence) between the distribution of approximate samples and the true
posterior distribution. Specifically, our method is applicable to algorithms which can be viewed as
importance sampling over an extended state space, such as annealed importance sampling (AIS;
[Nea01]) or sequential Monte Carlo (SMC; [MDJ06]). BDMC was proposed as a method for
accurately estimating the log marginal likelihood (log-ML) on simulated data by sandwiching the true
value between stochastic upper and lower bounds which converge in the limit of infinite computation.
These log-likelihood values were used to benchmark marginal likelihood estimators. We show that it
can also be used to measure the accuracy of approximate posterior samples obtained from algorithms
like AIS or SMC. More precisely, we refine the analysis of [GGA15] to derive an estimator which
upper bounds in expectation the Jeffreys divergence between the distribution of approximate samples
and the true posterior distribution. We show that this upper bound is quite accurate on some toy
distributions for which both the true Jeffreys divergence and the upper bound can be computed exactly.
We refer to our method of bounding the Jeffreys divergence by sandwiching the log-ML as Bounding
Divergences with REverse Annealing (BREAD).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
While our method is only directly applicable to certain algorithms such as AIS or SMC, these
algorithms involve many of the same design choices as traditional MCMC methods, such as the
choice of model representation (e.g. whether to collapse out certain variables), or the choice of
MCMC transition operators. Therefore, the ability to evaluate AIS-based inference should also yield
insights which inform the design of MCMC inference algorithms more broadly.
One additional hurdle must be overcome to use BREAD to evaluate posterior inference: the method
yields rigorous bounds only for simulated data because it requires an exact posterior sample. One
would like to be sure that the results on simulated data accurately reflect the accuracy of posterior
inference on the real-world data of interest. We present a protocol for using BREAD to diagnose
inference quality on real-world data. Specifically, we infer hyperparameters on the real data, simulate
data from those hyperparameters, measure inference quality on the simulated data, and validate the
consistency of the inference algorithm's behavior between the real and simulated data. (This protocol
is somewhat similar in spirit to the parametric bootstrap [ET98].)
We integrate BREAD into the tool chains of two probabilistic programming languages: WebPPL
[GS] and Stan [CGHL+ p]. Both probabilistic programming systems can be used as automatic
inference software packages, where the user provides a program specifying a joint probabilistic model
over observed and unobserved quantities. In principle, probabilistic programming has the potential to
put the power of sophisticated probabilistic modeling and efficient statistical inference into the hands
of non-experts, but realizing this vision is challenging because it is difficult for a non-expert user
to judge the reliability of results produced by black-box inference. We believe BREAD provides a
rigorous, general, and automatic procedure for monitoring the quality of posterior inference, so that
the user of a probabilistic programming language can have confidence in the accuracy of the results.
Our approach to evaluating probabilistic programming inference is closely related to independent
work [CTM16] that is also based on the ideas of BDMC. We discuss the relationships between both
methods in Section 4.
In summary, this work includes four main technical contributions. First, we show that BDMC yields
an estimator which upper bounds in expectation the Jeffreys divergence of approximate samples
from the true posterior. Second, we present a technique for exactly computing both the true Jeffreys
divergence and the upper bound on small examples, and show that the upper bound is often a
good match in practice. Third, we propose a protocol for using BDMC to evaluate the accuracy
of approximate inference on real-world datasets. Finally, we extend both WebPPL and Stan to
implement BREAD, and validate BREAD on a variety of probabilistic models in both frameworks.
As an example of how BREAD can be used to guide modeling and algorithmic decisions, we use
it to analyze the effectiveness of different representations of a matrix factorization model in both
WebPPL and Stan.
2 Background
2.1 Annealed Importance Sampling
Annealed importance sampling (AIS; [Nea01]) is a Monte Carlo algorithm commonly used to estimate
(ratios of) normalizing constants. More carefully, fix a sequence of T distributions p1 , . . . , pT , with
pt (x) = ft (x)/Zt . The final distribution in the sequence, pT , is called the target distribution; the
first distribution, p1 , is called the initial distribution. It is required that one can obtain one or more
exact samples from p1. (Footnote 1: Traditionally, this has meant having access to an exact sampler; in this work, we sometimes have access to a sample from p1, but not a sampler.) Given a sequence of reversible MCMC transition operators T1, . . . , TT,
where Tt leaves pt invariant, AIS produces a (nonnegative) unbiased estimate of ZT /Z1 as follows:
first, we sample a random initial state x1 from p1 and set the initial weight w1 = 1. For every stage
t ≥ 2 we update the weight w and sample the state xt according to

    wt ← wt−1 · ft(xt−1)/ft−1(xt−1),     xt ← sample from Tt(x | xt−1).     (1)
Neal [Nea01] justified AIS by showing that it is a simple importance sampler over an extended state
space (see Appendix A for a derivation in our notation). From this analysis, it follows that the weight
wT is an unbiased estimate of the ratio ZT/Z1. Two trivial facts are worth highlighting: when Z1
is known, Z1 wT is an unbiased estimate of ZT, and when ZT is known, wT/ZT is an unbiased
estimate of 1/Z1 . In practice, it is common to repeat the AIS procedure to produce K independent
estimates and combine these by simple averaging to reduce the variance of the overall estimate.
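To make the recursion in Eq. (1) concrete, here is a minimal Python sketch of one forward AIS chain; the function and argument names are illustrative assumptions, not code from the paper:

def forward_ais(log_f, transitions, sample_p1, rng):
    # log_f[t](x): unnormalized log-density of the (t+1)-th distribution.
    # transitions[t](x, rng): one MCMC step leaving the distribution
    #   proportional to exp(log_f[t]) invariant (index 0 is unused).
    # sample_p1(rng): exact sample from the initial distribution p1.
    # Returns (log_w, x): log of an unbiased estimate of ZT/Z1, and the
    #   final state x, an approximate sample from pT.
    x = sample_p1(rng)
    log_w = 0.0
    for t in range(1, len(log_f)):
        log_w += log_f[t](x) - log_f[t - 1](x)  # weight update, Eq. (1)
        x = transitions[t](x, rng)              # move toward the next distribution
    return log_w, x

Averaging exp(log_w) over K independent chains gives the usual combined AIS estimate of ZT/Z1.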
In most applications of AIS, the normalization constant ZT for the target distribution pT is the
focus of attention, and the initial distribution p1 is chosen to have a known normalization constant
Z1 . Any sequence of intermediate distributions satisfying a mild domination criterion suffices to
produce a valid estimate, but in typical applications, the intermediate distributions are simply defined
to be geometric averages ft(x) = f1(x)^(1−βt) fT(x)^βt, where the βt are monotonically increasing
parameters with β1 = 0 and βT = 1. (An alternative approach is to average moments [GMS13].)
In the setting of Bayesian posterior inference over parameters θ and latent variables z given some
fixed observation y, we take f1(θ, z) = p(θ, z) to be the prior distribution (hence Z1 = 1), and we
take fT(θ, z) = p(θ, z, y) = p(θ, z) p(y | θ, z). This can be viewed as the unnormalized posterior
distribution, whose normalizing constant ZT = p(y) is the marginal likelihood. Using geometric
averaging, the intermediate distributions are then

    ft(θ, z) = p(θ, z) p(y | θ, z)^βt.     (2)
In addition to moment averaging, reasonable intermediate distributions can be produced in the
Bayesian inference setting by conditioning on a sequence of increasing subsets of data; this insight
relates AIS to the seemingly different class of sequential Monte Carlo (SMC) methods [MDJ06].
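For the geometric path of Eq. (2), the unnormalized intermediate log-densities are just the log-prior plus βt times the log-likelihood. A small helper, assuming hypothetical log_prior and log_lik callables, might look like:

def geometric_path(log_prior, log_lik, betas):
    # log f_t(x) = log p(x) + beta_t * log p(y | x), as in Eq. (2).
    # The default argument b=b freezes each beta in its own closure.
    return [lambda x, b=b: log_prior(x) + b * log_lik(x) for b in betas]

# A typical schedule is monotonically increasing with beta_1 = 0 and
# beta_T = 1, e.g. betas = [(t / (T - 1)) ** 4 for t in range(T)].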
2.2 Stochastic lower bounds on the log partition function ratio
AIS produces a nonnegative unbiased estimate R̂ of the ratio R = ZT/Z1 of partition functions.
Unfortunately, because such ratios often vary across many orders of magnitude, it frequently happens
that R̂ underestimates R with overwhelming probability, while occasionally taking extremely large
values. Correspondingly, the variance may be extremely large, or even infinite.
For these reasons, it is more meaningful to estimate log R. Unfortunately, the logarithm of a
nonnegative unbiased estimate (such as the AIS estimate) is, in general, a biased estimator of the log
estimand. More carefully, let Â be a nonnegative unbiased estimator for A = E[Â]. Then, by Jensen's
inequality, E[log Â] ≤ log E[Â] = log A, and so log Â is a lower bound on log A in expectation. The
estimator log Â satisfies another important property: by Markov's inequality for nonnegative random
variables, Pr(log Â > log A + b) < e^(-b), and so log Â is extremely unlikely to overestimate log A
by any appreciable number of nats. These observations motivate the following definition [BGS15]: a
stochastic lower bound on X is an estimator X̂ satisfying E[X̂] ≤ X and Pr(X̂ > X + b) < e^(-b).
Stochastic upper bounds are defined analogously. The above analysis shows that log Â is a stochastic
lower bound on log A when Â is a nonnegative unbiased estimate of A, and, in particular, log R̂ is a
stochastic lower bound on log R. (It is possible to strengthen the tail bound by combining multiple
samples [GBD07].)
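Both defining properties of a stochastic lower bound are easy to check numerically. As a toy illustration (ours, not the paper's), take Â = exp(σZ − σ²/2) with Z standard normal, which is exactly unbiased for A = 1:

import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
# A_hat = exp(sigma*Z - sigma**2/2) has E[A_hat] = 1, so it is an
# unbiased estimator of A = 1 (hence log A = 0).
A_hat = np.exp(sigma * rng.standard_normal(200000) - sigma**2 / 2)
print(A_hat.mean())          # close to 1: unbiased for A
print(np.log(A_hat).mean())  # about -sigma**2/2 = -2: E[log A_hat] <= log A
b = 3.0
print((np.log(A_hat) > b).mean(), np.exp(-b))  # tail prob stays below e^(-b)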
2.3 Reverse AIS and Bidirectional Monte Carlo
Upper and lower bounds are most useful in combination, as one can then sandwich the true value. As
described above, AIS produces a stochastic lower bound on the ratio R; many other algorithms do as
well. Upper bounds are more challenging to obtain. The key insight behind bidirectional Monte Carlo
(BDMC; [GGA15]) is that, provided one has an exact sample from the target distribution pT , one can
run AIS in reverse to produce a stochastic lower bound on log Rrev = log Z1/ZT, and therefore a
stochastic upper bound on log R = −log Rrev. (In fact, BDMC is a more general framework which
allows a variety of partition function estimators, but we focus on AIS for pedagogical purposes.)
More carefully, for t = 1, . . . , T, define p̃t = pT−t+1 and T̃t = TT−t+1. Then p̃1 corresponds
to our original target distribution pT and p̃T corresponds to our original initial distribution p1. As
before, T̃t leaves p̃t invariant. Consider the estimate produced by AIS on the sequence of distributions
p̃1, . . . , p̃T and corresponding MCMC transition operators T̃1, . . . , T̃T. (In this case, the forward
chain of AIS corresponds to the reverse chain described in Section 2.1.) The resulting estimate R̂rev
is a nonnegative unbiased estimator of Rrev. It follows that log R̂rev is a stochastic lower bound
on log Rrev, and therefore −log R̂rev is a stochastic upper bound on log R = −log Rrev. BDMC is
simply the combination of this stochastic upper bound with the stochastic lower bound of Section 2.2.
Because AIS is a consistent estimator of the partition function ratio under the assumption of ergodicity
[Nea01], the two bounds converge as T → ∞; therefore, given enough computation, BDMC can
sandwich log R to arbitrary precision.
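Concretely, the reverse direction is just forward AIS run on the reversed annealing path, initialized from an exact sample of pT. A sketch reusing the hypothetical forward_ais helper above:

def bdmc(log_f, transitions, sample_p1, sample_pT, rng):
    # Returns (stochastic lower bound, stochastic upper bound) on log(ZT/Z1).
    log_w_fwd, _ = forward_ais(log_f, transitions, sample_p1, rng)

    # Reverse AIS = forward AIS on the reversed path, started from an exact
    # sample of pT (this requirement is what restricts BDMC to simulated data).
    log_w_rev, _ = forward_ais(log_f[::-1], transitions[::-1], sample_pT, rng)

    return log_w_fwd, -log_w_rev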
Returning to the setting of Bayesian inference, given some fixed observation y, we can apply BDMC
provided we have exact samples from both the prior distribution p(θ, z) and the posterior distribution
p(θ, z | y). In practice, the prior is typically easy to sample from, but it is typically infeasible to
generate exact posterior samples. However, in models where we can tractably sample from the joint
distribution p(θ, z, y), we can generate exact posterior samples for simulated observations using the
elementary fact that

    p(y) p(θ, z | y) = p(θ, z, y) = p(θ, z) p(y | θ, z).     (3)

In other words, if one ancestrally samples θ, z, and y, this is equivalent to first generating a dataset y
and then sampling (θ, z) exactly from the posterior. Therefore, for simulated data, one has access to a
single exact posterior sample; this is enough to obtain stochastic upper bounds on log R = log p(y).
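This observation reduces to a two-line recipe in code. A sketch with hypothetical sampler callables:

def simulate_exact_posterior_sample(sample_prior, sample_data, rng):
    # Ancestral sampling; by Eq. (3), the returned (theta, z) is an exact
    # draw from p(theta, z | y) for the returned y.
    theta, z = sample_prior(rng)    # (theta, z) ~ p(theta, z)
    y = sample_data(theta, z, rng)  # y ~ p(y | theta, z)
    return y, (theta, z)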
2.4 WebPPL and Stan
We focus on two particular probabilistic programming packages. First, we consider WebPPL [GS], a
lightweight probabilistic programming language built on Javascript, and intended largely to illustrate
some of the important ideas in probabilistic programming. Inference is based on Metropolis-Hastings
(M-H) updates to a program's execution trace, i.e. a record of all stochastic decisions made by the
program. WebPPL has a small and clean implementation, and the entire implementation is described
in an online tutorial on probabilistic programming [GS].
Second, we consider Stan [CGHL+ p], a highly engineered automatic inference system which is
widely used by statisticians and is intended to scale to large problems. Stan is based on the No U-Turn
Sampler (NUTS; [HG14]), a variant of Hamiltonian Monte Carlo (HMC; [Nea+11]) which chooses
trajectory lengths adaptively. HMC can be significantly more efficient than M-H over execution
traces because it uses gradient information to simultaneously update multiple parameters of a model,
but is less general because it requires a differentiable likelihood. (In particular, this disallows discrete
latent variables unless they are marginalized out analytically.)
3 Methods
There are at least two criteria we would desire from a sampling-based approximate inference algorithm
in order that its samples be representative of the true posterior distribution: we would like the
approximate distribution q(θ, z; y) to cover all the high-probability regions of the posterior p(θ, z | y),
and we would like it to avoid placing probability mass in low-probability regions of the posterior. The
former criterion motivates measuring the KL divergence DKL(p(θ, z | y) ‖ q(θ, z; y)), and the latter
criterion motivates measuring DKL(q(θ, z; y) ‖ p(θ, z | y)). If we desire both simultaneously, this
motivates paying attention to the Jeffreys divergence, defined as DJ(q‖p) = DKL(q‖p) + DKL(p‖q).
In this section, we present Bounding Divergences with Reverse Annealing (BREAD), a technique for
using BDMC to bound the Jeffreys divergence from the true posterior on simulated data, combined
with a protocol for using this technique to analyze sampler accuracy on real-world data.
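For discrete distributions, the Jeffreys divergence is simply the sum of the two one-sided KL divergences; a small helper of the kind one might use to sanity-check bounds on toy examples (illustrative, not the paper's code):

import numpy as np

def jeffreys(p, q, eps=1e-12):
    # D_J = KL(q || p) + KL(p || q) for discrete distributions p, q.
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return np.sum(q * (np.log(q) - np.log(p))) + np.sum(p * (np.log(p) - np.log(q)))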
3.1 Upper bounding the Jeffreys divergence in expectation
We now present our technique for bounding the Jeffreys divergence between the target distribution
and the distribution of approximate samples produced by AIS. In describing the algorithm, we revert
to the abstract state space formalism of Section 2.1, since the algorithm itself does not depend
on any structure specific to posterior inference (except for the ability to obtain an exact sample).
We first repeat the derivation from [GGA15] of the bias of the stochastic lower bound log R̂. Let
v = (x1, . . . , xT−1) denote all of the variables sampled in AIS before the final stage; the final state
xT corresponds to the approximate sample produced by AIS. We can write the distributions over the
forward and reverse AIS chains as:

    qfwd(v, xT) = qfwd(v) qfwd(xT | v)     (4)
    qrev(v, xT) = pT(xT) qrev(v | xT).     (5)
The distribution of approximate samples qfwd(xT) is obtained by marginalizing out v. Note that
sampling from qrev requires sampling exactly from pT, so strictly speaking, BREAD is limited
to those cases where one has at least one exact sample from pT, such as simulated data from a
probabilistic model (see Section 2.3).
The expectation of the estimate log R̂ of the log partition function ratio is given by:

    E[log R̂] = E_{qfwd(v,xT)} [ log ( fT(xT) qrev(v | xT) / (Z1 qfwd(v, xT)) ) ]     (6)
             = log ZT − log Z1 − DKL(qfwd(xT) qfwd(v | xT) ‖ pT(xT) qrev(v | xT))     (7)
             ≤ log ZT − log Z1 − DKL(qfwd(xT) ‖ pT(xT)).     (8)
(Note that qfwd(v | xT) is the conditional distribution of the forward chain, given that the final state is
xT .) The inequality follows because marginalizing out variables cannot increase the KL divergence.
We now go beyond the analysis in [GGA15], to bound the bias in the other direction. The expectation
of the reverse estimate R̂rev is

    E[log R̂rev] = E_{qrev(xT,v)} [ log ( Z1 qfwd(v, xT) / (fT(xT) qrev(v | xT)) ) ]     (9)
               = log Z1 − log ZT − DKL(pT(xT) qrev(v | xT) ‖ qfwd(xT) qfwd(v | xT))     (10)
               ≤ log Z1 − log ZT − DKL(pT(xT) ‖ qfwd(xT)).     (11)
As discussed above, log R̂ and −log R̂rev can both be seen as estimators of log(ZT/Z1), the former of
which is a stochastic lower bound, and the latter of which is a stochastic upper bound. Consider the
gap between these two bounds, B̂ ≜ (−log R̂rev) − log R̂. It follows from Eqs. (8) and (11) that, in
expectation, B̂ upper bounds the Jeffreys divergence

    J ≜ DJ(pT(xT), qfwd(xT)) ≜ DKL(pT(xT) ‖ qfwd(xT)) + DKL(qfwd(xT) ‖ pT(xT))     (12)

between the target distribution pT and the distribution qfwd(xT) of approximate samples.
Alternatively, if one happens to have some other lower bound L or upper bound U on log R, then one
can bound either of the one-sided KL divergences by running only one direction of AIS. Specifically,
from Eq. (8), E[U − log R̂] ≥ DKL(qfwd(xT) ‖ pT(xT)), and from Eq. (11), E[−log R̂rev − L] ≥
DKL(pT(xT) ‖ qfwd(xT)).
How tight is the expectation B ≜ E[B̂] as an upper bound on J? We evaluated both B and J exactly
on some toy distributions and found them to be a fairly good match. Details are given in Appendix B.
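In code, the quantity BREAD reports is just the gap between the reverse upper bound and the forward lower bound, averaged over independent chains. A sketch building on the hypothetical bdmc helper above:

def bread_gap(log_f, transitions, sample_p1, sample_pT, rng, num_chains=100):
    # Average gap B_hat = upper - lower, which upper bounds D_J (Eq. 12)
    # in expectation. We average the gaps of individual chains rather than
    # averaging weights, since the bound of Section 3.1 is stated per chain.
    gaps = []
    for _ in range(num_chains):
        lower, upper = bdmc(log_f, transitions, sample_p1, sample_pT, rng)
        gaps.append(upper - lower)
    return sum(gaps) / len(gaps)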
3.2 Application to real-world data
So far, we have focused on the setting of simulated data, where it is possible to obtain an exact
posterior sample, and then to rigorously bound the Jeffreys divergence using BDMC. However, we
are more likely to be interested in evaluating the performance of inference on real-world data, so
we would like to simulate data which resembles a real-world dataset of interest. One particular
difficulty is that, in Bayesian analysis, hyperparameters are often assigned non-informative or weakly
informative priors, in order to avoid biasing the inference. This poses a challenge for BREAD, as
datasets generated from hyperparameters sampled from such priors (which are often very broad) can
be very dissimilar to real datasets, and hence conclusions from the simulated data may not generalize.
In order to generate simulated datasets which better match a real-world dataset of interest, we adopt
the following heuristic scheme: we first perform approximate posterior inference on the real-world
dataset. Let η̂real denote the estimated hyperparameters. We then simulate parameters and data from
the forward model p(θ | η̂real) p(D | η̂real, θ). The forward AIS chain is run on D in the usual way.
However, to initialize the reverse chain, we first start with (η̂real, θ), and then run some number of
MCMC transitions which preserve p(η, θ | D), yielding an approximate posterior sample (η*, θ*).
In general, (η*, θ*) will not be an exact posterior sample, since η̂real was not sampled from p(η | D).
However, the true hyperparameters η̂real which generated D ought to be in a region of high posterior
mass unless the prior p(η) concentrates most of its mass away from η̂real. Therefore, we expect even
a small number of MCMC steps to produce a plausible posterior sample. This motivates our use of
(η*, θ*) in place of an exact posterior sample. We validate this procedure in Section 5.1.2.
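The full protocol can be summarized in a short sketch; every callable below (hyperparameter fitting, simulation, and the MCMC step on (η, θ)) is a hypothetical placeholder for model-specific code:

def bread_on_real_data(real_D, fit_hypers, sample_theta, sample_data, mcmc_step, rng, S=100):
    # Section 3.2 protocol: build a simulated dataset resembling real_D,
    # plus an approximate posterior sample to initialize the reverse chain.
    eta_hat = fit_hypers(real_D, rng)       # approximate inference on real data
    theta = sample_theta(eta_hat, rng)      # theta ~ p(theta | eta_hat)
    D = sample_data(eta_hat, theta, rng)    # D ~ p(D | eta_hat, theta)
    eta, th = eta_hat, theta
    for _ in range(S):                      # refresh toward p(eta, theta | D)
        eta, th = mcmc_step(eta, th, D, rng)
    return D, (eta, th)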
4 Related work
Much work has been devoted to the diagnosis of Markov chain convergence (e.g. [CC96; GR92;
BG98]). Diagnostics have been developed both for estimating the autocorrelation function of
statistics of interest (which determines the number of effective samples from an MCMC chain) and
for diagnosing whether Markov chains have reached equilibrium. In general, convergence diagnostics
cannot confirm convergence; they can only identify particular forms of non-convergence. By contrast,
BREAD can rigorously demonstrate convergence in the simulated data setting.
There has also been much interest in automatically configuring parameters of MCMC algorithms.
Since it is hard to reliably summarize the performance of an MCMC algorithm, such automatic
configuration methods typically rely on method-specific analyses. For instance, Roberts and Rosenthal
[RR01] showed that the optimal acceptance rate of Metropolis-Hastings with an isotropic proposal
distribution is 0.234 under fairly general conditions. M-H algorithms are sometimes tuned to
achieve this acceptance rate, even in situations where the theoretical analysis doesn't hold. Rigorous
convergence measures might enable more direct optimization of algorithmic hyperparameters.
Gorham and Mackey [GM15] presented a method for directly estimating the quality of a set of
approximate samples, independently of how those samples were obtained. This method has strong
guarantees under a strong convexity assumption. By contrast, BREAD makes no assumptions about
the distribution itself, so its mathematical guarantees (for simulated data) are applicable even to
multimodal or badly conditioned posteriors.
It has been observed that heating and cooling processes yield bounds on log-ratios of partition
functions by way of finite difference approximations to thermodynamic integration. Neal [Nea96]
used such an analysis to motivate tempered transitions, an MCMC algorithm based on heating and
cooling a distribution. His analysis cannot be directly applied to measuring convergence, as it assumed
equilibrium at each temperature. Jarzynski [Jar97] later gave a non-equilibrium analysis which is
equivalent to that underlying AIS [Nea01].
We have recently learned of independent work [CTM16] which also builds on BDMC to evaluate the
accuracy of posterior inference in a probabilistic programming language. In particular, Cusumano-Towner and Mansinghka [CTM16] define an unbiased estimator for a quantity called the subjective
divergence. The estimator is equivalent to BDMC except that the reverse chain is initialized from an
arbitrary reference distribution, rather than the true posterior. In [CTM16], the subjective divergence
is shown to upper bound the Jeffreys divergence when the true posterior is used; this is equivalent to
our analysis in Section 3.1. Much less is known about subjective divergence when the reference distribution is not taken to be the true posterior. (Our approximate sampling scheme for hyperparameters
can be viewed as a kind of reference distribution.)
5 Experiments
In order to experiment with BREAD, we extended both WebPPL and Stan to run forward and reverse
AIS using the sequence of distributions defined in Eq. (2). The MCMC transition kernels were the
standard ones provided by both platforms. Our first set of experiments was intended to validate that
BREAD can be used to evaluate the accuracy of posterior inference in realistic settings. Next, we
used BREAD to explore the tradeoffs between two different representations of a matrix factorization
model in both WebPPL and Stan.
5.1 Validation
As described above, BREAD returns rigorous bounds on the Jeffreys divergence only when the data
are sampled from the model distribution. Here, we address three ways in which it could potentially
give misleading results. First, the upper bound B may overestimate the true Jeffreys divergence J .
Second, results on simulated data may not correspond to results on real-world data if the simulated
data are not representative of the real-world data. Finally, the fitted hyperparameter procedure of
Section 3.2 may not yield a sample sufficiently representative of the true posterior p(?, ? | D). The
first issue, about the accuracy of the bound, is addressed in Appendix B.1.1; the bound appears to be
fairly close to the true Jeffreys divergence on some toy distributions. We address the other two issues
in this section. In particular, we attempted to validate that the behavior of the method on simulated
Figure 1: (a) Validation of the consistency of the behavior of forward AIS on real and simulated data for
the logistic regression model. Since the log-ML values need not match between the real and simulated data,
the y-axes for each curve are shifted based on the maximum log-ML lower bound obtained by forward AIS.
(b) Same as (a), but for matrix factorization. The complete set of results on all datasets is given in Appendix D.
(c) Validation of the fitted hyperparameter scheme on the logistic regression model (see Section 5.1.2 for details).
Reverse AIS curves are shown as the number of Gibbs steps used to initialize the hyperparameters is varied.
data is consistent with that on real data, and that the fitted-hyperparameter samples can be used as a
proxy for samples from the posterior. All experiments in this section were performed using Stan.
5.1.1 Validating consistency of inference behavior between real and simulated data
To validate BREAD in a realistic setting, we considered five models based on examples from the Stan
manual [Sta], and chose a publicly available real-world dataset for each model. These models include:
linear regression, logistic regression, matrix factorization, autoregressive time series modeling, and
mixture-of-Gaussians clustering. See Appendix C for model details and Stan source code.
In order to validate the use of simulated data as a proxy for real data in the context of BREAD,
we fit hyperparameters to the real-world datasets and simulated data from those hyperparameters,
as described in Section 3.2. In Fig. 1 and Appendix D, we show the distributions of forward and
reverse AIS estimates on simulated data and forward AIS estimates on real-world data, based on 100
AIS chains for each condition.2 Because the distributions of AIS estimates included many outliers,
we visualize quartiles of the estimates rather than means.3 The real and simulated data need not
have the same marginal likelihood, so the AIS estimates for real and simulated data are shifted
vertically based on the largest forward AIS estimate obtained for each model. For all five models
under consideration, the forward AIS curves were nearly identical (up to a vertical shift), and the
distributions of AIS estimates were very similar at each number of AIS steps. (An example where the
forward AIS curves failed to match up due to model misspecification is given in Appendix D.) Since
the inference behavior appears to match closely between the real and simulated data, we conclude
that data simulated using fitted hyperparameters can be a useful proxy for real data when evaluating
inference algorithms.
5.1.2 Validating the approximate posterior over hyperparameters
As described in Section 3.2, when we simulate data from fitted hyperparameters, we use an approximate (rather than exact) posterior sample (η*, θ*) to initialize the reverse chain. Because of
this, BREAD is not mathematically guaranteed to upper bound the Jeffreys divergence even on the
simulated data. In order to determine the effect of this approximation in practice, we repeated the
procedure of Section 5.1.1 for all five models, but varying S, the number of MCMC steps used to
obtain (η*, θ*), with S ∈ {10, 100, 1000, 10000}. The reverse AIS estimates are shown in Fig. 1
and Appendix D. (We do not show the forward AIS estimates because these are unaffected by S.) In
all five cases, the reverse AIS curves were statistically indistinguishable. This validates our use of
fitted hyperparameters, as it suggests that the use of approximate samples of hyperparameters has
little impact on the reverse AIS upper bounds.
Footnote 2: The forward AIS chains are independent, while the reverse chains share an initial state.
Footnote 3: Normally, such outliers are not a problem for AIS, because one averages the weights wT, and this average is insensitive to extremely small values. Unfortunately, the analysis of Section 3.1 does not justify such averaging, so we report estimates corresponding to individual AIS chains.
Figure 2: Comparison of Jeffreys divergence bounds for matrix factorization in Stan and WebPPL, using the
collapsed and uncollapsed formulations. Given as a function of (a) number of MCMC steps, (b) running time.
5.2 Scientific findings produced by BREAD
Having validated various aspects of BREAD, we applied it to investigate the choice of model representation in Stan and WebPPL. During our investigation, we also uncovered a bug in WebPPL, indicating
the potential usefulness of BREAD as a means of testing the correctness of an implementation.
5.2.1 Comparing model representations
Many models can be written in more than one way, for example by introducing or collapsing latent
variables. Performance of probabilistic programming languages can be sensitive to such choices of
representation, and the representation which gives the best performance may vary from one language
to another. We consider the matrix factorization model described above, which we now specify in
more detail. We approximate an N ? D matrix Y as a low rank matrix, the product of matrices
U and V with dimensions N ? K and K ? D respectively (where K < min(N, D)). We use a
spherical Gaussian observation model, and spherical Gaussian priors on U and V:
    uik ~ N(0, σu²),     vkj ~ N(0, σv²),     yij | ui, vj ~ N(ui⊤vj, σ²).

We can also collapse U to obtain the model vkj ~ N(0, σv²) and yi | V ~ N(0, σu² V⊤V + σ² I).
In general, collapsing variables can help MCMC samplers mix faster at the expense of greater
computational cost per update. The precise tradeoff can depend on the size of the model and dataset,
the choice of MCMC algorithm, and the underlying implementation, so it would be useful to have a
quantitative criterion to choose between them.
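To make the two representations concrete, here are illustrative NumPy/SciPy sketches of their unnormalized log-joints with all hyperparameters fixed to 1 (assuming SciPy is available; these are our own sketches, not the Stan or WebPPL programs used in the experiments):

import numpy as np
from scipy.stats import multivariate_normal

def log_joint_uncollapsed(U, V, Y, s_u=1.0, s_v=1.0, s=1.0):
    # Unnormalized log p(U, V, Y); Gaussian normalizing constants dropped.
    lp = -0.5 * (U ** 2).sum() / s_u ** 2 - 0.5 * (V ** 2).sum() / s_v ** 2
    return lp - 0.5 * ((Y - U @ V) ** 2).sum() / s ** 2

def log_joint_collapsed(V, Y, s_u=1.0, s_v=1.0, s=1.0):
    # Unnormalized log p(V, Y) with U integrated out analytically.
    D = V.shape[1]
    cov = s_u ** 2 * V.T @ V + s ** 2 * np.eye(D)  # covariance of each row of Y
    lp = -0.5 * (V ** 2).sum() / s_v ** 2
    return lp + multivariate_normal.logpdf(Y, mean=np.zeros(D), cov=cov).sum()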
We fixed the values of all hyperparameters to 1, and set N = 50, K = 5 and D = 25. We ran
BREAD on both platforms (Stan and WebPPL) and for both formulations (collapsed and uncollapsed)
(see Fig. 2). The simulated data and exact posterior sample were shared between all conditions in
order to make the results directly comparable.
As predicted, the collapsed sampler resulted in slower updates but faster convergence (in terms of
the number of steps). However, the per-iteration convergence benefit of collapsing was much larger
in WebPPL than in Stan (perhaps because of the different underlying inference algorithm). Overall,
the tradeoff between efficiency and convergence speed appears to favour the uncollapsed version in
Stan, and the collapsed version in WebPPL (see Fig. 2(b)). (Note that this result holds only for our
particular choice of problem size; the tradeoff may change given different model or dataset sizes.)
Hence BREAD can provide valuable insights into the tricky question of which representations of
models to choose to achieve faster convergence.
5.2.2 Debugging
Mathematically, the forward and reverse AIS chains yield lower and upper bounds on log p(y) with
high probability; if this behavior is not observed, that indicates a bug. In our experimentation with
WebPPL, we observed a case where the reverse AIS chain yielded estimates significantly lower than
those produced by the forward chain, inconsistent with the theoretical guarantee. This led us to
find a subtle bug in how WebPPL sampled from a multivariate Gaussian distribution (which had the
effect that the exact posterior samples used to initialize the reverse chain were incorrect).4 These
days, while many new probabilistic programming languages are emerging and many are in active
development, such debugging capabilities provided by BREAD can potentially be very useful.
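Such a consistency check is easy to automate on simulated data: forward (lower-bound) estimates should not systematically exceed reverse (upper-bound) estimates. A minimal illustrative assertion:

import numpy as np

def check_bounds(forward_estimates, reverse_estimates):
    # On simulated data, forward AIS lower-bounds log p(y) and reverse AIS
    # upper-bounds it with high probability; a systematic crossing of the
    # two suggests a bug, e.g. in exact posterior sampling or the kernels.
    if np.median(reverse_estimates) < np.median(forward_estimates):
        raise AssertionError("reverse AIS estimates fall below forward AIS: "
                             "check the exact posterior sample and kernels")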
Footnote 4: Issue: https://github.com/probmods/webppl/issues/473
References
[BG98] S. P. Brooks and A. Gelman. "General methods for monitoring convergence of iterative simulations". Journal of Computational and Graphical Statistics 7.4 (1998), pp. 434-455.
[BGS15] Y. Burda, R. B. Grosse, and R. Salakhutdinov. "Accurate and conservative estimates of MRF log-likelihood using reverse annealing". In: Artificial Intelligence and Statistics. 2015.
[CC96] M. K. Cowles and B. P. Carlin. "Markov chain Monte Carlo convergence diagnostics: a comparative review". Journal of the American Statistical Association 91.434 (1996), pp. 883-904.
[CGHL+ p] B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker, J. Guo, P. Li, and A. Riddell. "Stan: a probabilistic programming language". Journal of Statistical Software (in press).
[CTM16] M. F. Cusumano-Towner and V. K. Mansinghka. Quantifying the probable approximation error of probabilistic inference programs. arXiv:1606.00068. 2016.
[ET98] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall/CRC, 1998.
[GBD07] V. Gogate, B. Bidyuk, and R. Dechter. "Studies in lower bounding probability of evidence using the Markov inequality". In: Conference on Uncertainty in AI. 2007.
[GGA15] R. B. Grosse, Z. Ghahramani, and R. P. Adams. Sandwiching the marginal likelihood with bidirectional Monte Carlo. arXiv:1511.02543. 2015.
[GM15] J. Gorham and L. Mackey. "Measuring sample quality with Stein's method". In: Neural Information Processing Systems. 2015.
[GMRB+08] N. D. Goodman, V. K. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. "Church: a language for generative models". In: Conference on Uncertainty in AI. 2008.
[GMS13] R. Grosse, C. J. Maddison, and R. Salakhutdinov. "Annealing between distributions by averaging moments". In: Neural Information Processing Systems. 2013.
[GR92] A. Gelman and D. B. Rubin. "Inference from iterative simulation using multiple sequences". Statistical Science 7.4 (1992), pp. 457-472.
[GS] N. D. Goodman and A. Stuhlmüller. The Design and Implementation of Probabilistic Programming Languages. http://dippl.org.
[HG14] M. D. Homan and A. Gelman. "The No-U-turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo". J. Mach. Learn. Res. 15.1 (Jan. 2014), pp. 1593-1623. ISSN: 1532-4435.
[Jar97] C. Jarzynski. "Equilibrium free-energy differences from non-equilibrium measurements: a master-equation approach". Physical Review E 56 (1997), pp. 5018-5035.
[LTBS00] D. J. Lunn, A. Thomas, N. Best, and D. Spiegelhalter. "WinBUGS - a Bayesian modelling framework: concepts, structure, and extensibility". Statistics and Computing 10.4 (2000), pp. 325-337.
[MDJ06] P. del Moral, A. Doucet, and A. Jasra. "Sequential Monte Carlo samplers". Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68.3 (2006), pp. 411-436.
[Nea+11] R. M. Neal et al. "MCMC using Hamiltonian dynamics". Handbook of Markov Chain Monte Carlo 2 (2011), pp. 113-162.
[Nea01] R. M. Neal. "Annealed importance sampling". Statistics and Computing 11 (2001), pp. 125-139.
[Nea96] R. M. Neal. "Sampling from multimodal distributions using tempered transitions". Statistics and Computing 6.4 (1996), pp. 353-366.
[RR01] G. O. Roberts and J. S. Rosenthal. "Optimal scaling for various Metropolis-Hastings algorithms". Statistical Science 16.4 (2001), pp. 351-367.
[Sta] Stan Modeling Language Users Guide and Reference Manual. Stan Development Team.
5,848 | 6,291 | Unsupervised Learning from Noisy Networks with
Applications to Hi-C Data
Bo Wang*1, Junjie Zhu2, Oana Ursu3, Armin Pourshafeie4, Serafim Batzoglou1 and Anshul Kundaje3,1
1 Department of Computer Science, Stanford University
2 Department of Electrical Engineering, Stanford University
3 Department of Genetics, Stanford University
4 Department of Physics, Stanford University
Abstract
Complex networks play an important role in a plethora of disciplines in natural
sciences. Cleaning up noisy observed networks poses an important challenge in
network analysis. Existing methods utilize labeled data to alleviate the noise levels. However, labeled data is usually expensive to collect while unlabeled
data can be gathered cheaply. In this paper, we propose an optimization framework
to mine useful structures from noisy networks in an unsupervised manner. The key
feature of our optimization framework is its ability to utilize local structures as
well as global patterns in the network. We extend our method to incorporate multi-resolution networks in order to add further resistance in the presence of high levels
enhance the performance. We empirically test the effectiveness of our method in
denoising a network by demonstrating an improvement in community detection
results on multi-resolution Hi-C data both with and without Capture-C-generated
partial labels.
1 Introduction
Complex networks emerge in a plethora of disciplines including computer science, social sciences,
biology, among others. They entail non-trivial topological features and patterns critical to understanding
interactions within complicated systems. However, observed networks from data are typically noisy
due to imperfect measurements. The adverse effects of noise pose a critical challenge in unraveling
clear structures and dynamics in the networks. Therefore, network denoising can strongly influence
how the networks are interpreted, and can significantly improve the outcome of downstream analysis
such as global and local community detection.
The goal of community detection is to identify meaningful structures/communities underlying the
provided samples in an unsupervised manner. While the performance of community detection
algorithms can worsen due to noise [1], one may use prior knowledge about the structure of the
communities, such as the presence of clusters, to recover local networks [25]. In addition to the
special structure that one may expect in a network, a small portion of high confidence links may be
available. The combination of the special structure and the confident links can be used to denoise
the network that might include both noisy or missing links. How to incorporate multiple sources
of information to construct a network has been widely studied in the context of data fusion or data
aggregation [3].
Biology offers a special case where overall structure of the network of interest might be known
from the science but the data may be riddled with noise. One example of this is the 3D structure, or
folding, of DNA. In biology, this structure is important as, among other things, the DNA topology
* [email protected]
has been shown to have a fundamental impact on gene expression in biological processes [4]. For
example, many genes are in 3D contact with genomic regions that are far away in the linear genome
but close in 3D space. These genomic regions contain regulatory elements that control when the
gene is active [5, 6, 34]. The rules by which regulatory elements come in contact with their target
genes are still unclear [7]. While the exact mechanism for this selective interaction between the
regulatory elements and the target genes is unknown, the 3D organization of the genome in domains
of interaction seems to play a crucial role. Furthermore, topologically associated domains (TADs) [8],
where local interactions are observed, are of potential biological interest as they have been shown to
be conserved between mice and humans, which suggests an ancient root for higher order structures in
the genome [8].
The interaction map of regulatory elements in a genome can be viewed as a network, where each
node is a regulatory element and each link represents the interaction strength between two of the
elements. In this context, we not only have prior knowledge about the types of structures in this
interaction network, but we also have various types of noisy and incomplete observations of the links
in this network based on recently-developed technologies.
One approach to observe these links is Hi-C, an experiment which uses high-throughput sequencing to
construct a 2D contact map measuring the frequency with which pairs of genomic regions co-localize
in 3D space. The results of Hi-C experiments can be summarized at multiple resolutions (lower
resolutions are obtained by binning genomic regions together), ranging from 1 kb to 1 Mb[8?10].
Higher resolution maps capture more fine grained location interactions but the background noise
generated by random collisions can be larger [11]. Lower resolution maps have less noise, but at
the cost of losing the exact localization of genomic contacts. In addition to Hi-C, there are other
experiment variants such as 4C, 5C and Capture-C technologies, which provide a new window into
detection of a small number of interaction links with high confidence [12?15], by focusing sequencing
resources on a selected subset of contacts. For these experiments the increased confidence comes at
the cost of not measuring the full contact map. Thus, the integration of multi-resolution noisy Hi-C
data with high-confidence Capture-C data is not only an interesting problem in the context of general network denoising, but it is also biologically relevant.
1.1 Related Work
Many applications in biology utilize multiple measurements to construct a single biological network.
General approaches such as [16] have relied on specific models to reveal structures from the multiple
measurements. However, some biological networks do not fit these model assumptions (e.g. Gaussian).
For example, while Hi-C data can be summarized at multiple resolutions, standard model assumptions
are not appropriate for combining the resolutions.
Furthermore, one may acquire a small subset of highly confident measurements. In the case of Hi-C
data, this can be done through Capture-C [12, 13, 15] technologies. While matrix completion is a
well studied problem [2] to recover missing measurements, the setting with Capture-C is slightly
different. In particular, the number of highly confident entries for the n × n adjacency matrix of
rank r may be less than nr log n, which is suggested for matrix completion [2]. Additionally, such
a method would not take advantage of the large amount of data available, albeit with higher noise,
from different sources.
A common application of a denoised networks is to more reliably detect biologically-relevant
communities. General community detection methods have been used to find protein complexes
[17, 18], genetically related subpopulations [19], like-minded individuals in a social network [20],
and many other tasks [21, 22]. Aside from the all-purpose algorithms mentioned above, there are
specialized algorithms for the problem of community detection in Hi-C data. Rao et al. define TADs
using the specialized Arrowhead algorithm [10]. Cabreros et al. used a mixed-membership stochastic
block model to discover communities in Hi-C data [23]. Their method can detect the number of
communities or can be forced to find a specified number of communities. Dixon et al. defined a
directionality index that quantifies the asymmetry between the upstream and downstream interaction
bias for a position [8]. A hidden Markov model was subsequently used to detect biased states based
on these scores [8].
1.2 Our Contribution
As mentioned above, Hi-C data can be represented by many different resolutions. Although the
data and noise from these resolutions are not independent, the different resolutions still contain
information that can help denoise the data. We propose a model-free optimization framework to
extract information from the different resolutions to denoise the Hi-C data. While generic community
detection methods are limited to using only a single input network, our optimization framework is
able to pool data from different resolutions, and produces a single denoised network. This framework
allows us to apply community detection methods to multi-resolution Hi-C data.
Furthermore, in special cases, a subset of the interaction network may be known with a high confidence
using Capture-C [12]. To our knowledge, there is no algorithm with the capability of taking advantage
of this highly confident set to improve the denoising of Hi-C data. Our framework is able to take
a multi-resolution network in addition to the confident set of data to denoise the corresponding
network. Applying our framework to datasets with simulated ground-truth communities derived from
chromosomes 14 and 21 of GM12878 in [10], we find that our framework can indeed leverage the
multiple sources of information to reveal the communities underlying the noisy and missing data.
2 Problem Setup
2.1 General Learning Framework
Throughout this paper, we will use a real and symmetric n × n matrix to represent a network on n
nodes. Accordingly, the (i, j)th entry of the matrix will be used to denote the weight or intensity of a
link between node i and node j.
Suppose we want to construct a weighted network S ∈ R^{n×n} from a noisy observation W ∈ R^{n×n}
on the same nodes, where the noise introduces false-positive and false-negative links. If the network
of interest S is low rank, then this inherent structure can be used to denoise W . This intuition that the
detected noisy matrix could lie near an underlying low-rank or sparse matrix is also key to subspace
detection algorithms, such as Sparse Subspace Clustering [25] and Low-Rank Representation [26].
We use this intuition to formulate our optimization framework below:
$$\begin{aligned}
\text{minimize}\quad & -\operatorname{tr}(W^{\top} S) + \lambda\, L(S,F) + \gamma\,\|S\|_F^2 && \text{(OPT1)}\\
\text{with respect to}\quad & S \in \mathbb{R}^{n\times n},\; F \in \mathbb{R}^{n\times C}\\
\text{subject to}\quad & F^{\top}F = I_C,\quad \sum_{j} S_{ij} = 1,\quad S_{ij} \ge 0 \ \text{for all}\ (i,j),\\
\text{where}\quad & L(S,F) = \operatorname{tr}\big(F^{\top}(I_n - S)F\big),
\end{aligned}$$
here λ, γ > 0 are tuning parameters (see Appendix 7.3). F is an auxiliary C-dimensional variable (with C < n) and is constrained to consist of orthogonal columns. S is constrained to be a stochastic matrix and further regularized by the squared Frobenius norm, i.e. ‖S‖²_F.
In order to represent the resulting denoised network, the solution S can be made symmetric by
(S + S > )/2. In addition, the objective and constraints in (OPT1) ensure two key properties for S to
represent a denoised network:
Property (1): S complies well with the links in network W .
The first term in the objective function of (OPT1) involves maximizing the Frobenius product of S
and W , i.e.,
$$\operatorname{tr}(W^{\top} S) = \sum_{i,j} W_{ij} S_{ij},$$
so each link in S is consistent with W . Taking the sum of the element-wise products allows S to be
invariant to scaling of W .
Property (2): S is low rank and conveys cluster structures.
The term L(S, F ) in (OPT1) is an imposed graph regularization on S so that it is embedded in a
low-dimensional space spanned by F . To see this, first note that (In S) is the graph Laplacian of S
as the row sums (and column sums) of S are 1. It can be shown that
$$L(S,F) = \operatorname{tr}\big(F^{\top}(I_n - S)F\big) = \frac{1}{2}\sum_{i,j} \|f_i - f_j\|_2^2\, S_{ij},$$
where fi and fj are the ith and jth rows of F respectively. Thus, each row of F can be interpreted
as a C-dimensional embedding of the corresponding node in the network. Here, ‖·‖₂ denotes the ℓ₂-norm, so the minimization of L(S,F) enforces the link S_ij to capture the Euclidean distance of node
i and node j in the vector space spanned by F .
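A quick numerical sanity check of this identity (a minimal NumPy sketch of ours, not from the paper; it assumes S is symmetric and doubly stochastic, under which the identity holds exactly):
```python
import numpy as np

rng = np.random.default_rng(0)
n, C = 8, 3

# A symmetric, doubly stochastic S: symmetrized convex combination of permutations.
P1 = np.eye(n)[rng.permutation(n)]
P2 = np.eye(n)[rng.permutation(n)]
D = 0.3 * P1 + 0.7 * P2          # doubly stochastic
S = (D + D.T) / 2.0              # still doubly stochastic, now symmetric

F = rng.standard_normal((n, C))

lhs = np.trace(F.T @ (np.eye(n) - S) @ F)
rhs = 0.5 * sum(S[i, j] * np.sum((F[i] - F[j]) ** 2)
                for i in range(n) for j in range(n))
assert np.isclose(lhs, rhs)
```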
2.2 Learning from multi-resolution networks
The general graph denoising framework above can be easily extended to incorporate additional
information. Suppose instead of a single observation W , we have m noisy observations or representations of the underlying network S. Denote these observations as W1 , ..., Wm . We refer to this
multi-resolution network as W , where each link in W contains m different ordered values. (This
terminology is not only used to conveniently correspond to the Hi-C interaction maps at different
resolutions, but it also helps to remind us that the noise in each network is not necessarily identical
or stochastic.) A multi-resolution network consists of different representations of S and provides
more modeling power than a single-resolution network [32]. We can use this additional information
to extend (OPT1) to the following optimization problem:
minimize
tr
?? X
`
? ` W`
with respect to S 2 Rn?n ,
?> ?
S +
L(S, F ) + ||S||2F + P (?)
(OPT2)
F 2 Rn?C , ? 2 Rm
X
X
subject to F > F = IC ,
Sij = 1,
?` = 1, Sij 0 for all (i, j), ?`
j
`
X
where L(S, F ) = tr(F > (In S)F ), P (?) =
?` log ?` ,
0 for all `.
`
where , , > 0 are tuning parameters (see Appendix 7.3). The vector ? = [?1 , ..., ?m ]> weights
the m observed networks W1 , ..., Wm and needs to be learned from the data.
The modification to the first term of the objective, from (OPT1) to (OPT2), allows S to simultaneously conform with all of the networks according to their importance. To avoid overfitting with the weights or selecting a single noisy network, we regularize α via P(α) in the objective of (OPT2). In our application, we chose P(α) so that the entropy of α is high, but one may select other penalties for P(α) (e.g., L1 or L2 penalties).
While (OPT2) is non-convex with respect to all three variables S, F, and α, the problem is convex with
respect to each variable conditional on fixing the other variables. Therefore, we apply an alternating
convex optimization method to solve this tri-convex problem efficiently. The three optimization
problems are solved iteratively until all the solutions converge. The following explains how each
variable is initialized and updated.
(1) Initialization.
The variables S, F and α are initialized as
$$\alpha^{(0)} = \frac{1}{m}\,\mathbf{1}_m, \qquad S^{(0)} = \sum_{\ell} \alpha^{(0)}_{\ell} W_{\ell}, \qquad F^{(0)} = [v_1^{(0)}, \ldots, v_C^{(0)}],$$
where 1_m is a length-m vector of ones, i.e., 1_m = [1, ..., 1]⊤. The weight vector α is set to be a uniform vector to avoid bias, and S is initialized to be the weighted sum of the individual observed networks W_ℓ according to the initial weights. Finally, F is initialized to be the top C eigenvectors of S, denoted as v_1^{(0)}, ..., v_C^{(0)}.
(2) Updating S with fixed F and α.
When we minimize the objective function only with respect to the similarity matrix S in (OPT2),
we can solve the equivalent problem:
$$\begin{aligned}
\text{minimize}\quad & -\sum_{i,j}\Big(\sum_{\ell} \alpha_{\ell} (W_{\ell})_{i,j} + \lambda\,(FF^{\top})_{i,j}\Big) S_{i,j} \;+\; \gamma \sum_{i,j} S_{i,j}^2 && \text{(OPT3)}\\
\text{with respect to}\quad & S \in \mathbb{R}^{n\times n}\\
\text{subject to}\quad & \sum_{j} S_{ij} = 1,\quad S_{ij} \ge 0 \ \text{for all}\ (i,j).
\end{aligned}$$
This optimization problem is clearly convex because the objective is quadratic in Si,j and the
constraints are all linear. We used the KKT conditions to solve for the updates of S. Details of
the solution are provided in Appendix 7.1.
(3) Updating F with fixed S and α. When we minimize the objective function only with respect to
the similarity matrix F in (OPT2), we can solve the equivalent problem:
$$\begin{aligned}
\text{minimize}\quad & \operatorname{tr}\big(F^{\top}(I_n - S)F\big) && \text{(OPT4)}\\
\text{with respect to}\quad & F \in \mathbb{R}^{n\times C}\\
\text{subject to}\quad & F^{\top}F = I_C.
\end{aligned}$$
This optimization problem can also be interpreted as solving the eigenvalue problem for (S − I_n), because the trace of F⊤(S − I_n)F is maximized when the columns of F form an orthogonal basis of the eigen-space associated with the C largest eigenvalues of (S − I_n). We used standard numerical
toolboxes in MATLAB to solve for the eigenvectors.
(4) Updating α with fixed F and S.
Now treating S and F as parameters, the equivalent problem with respect to α becomes a simple convex problem:
$$\begin{aligned}
\text{minimize}\quad & -\sum_{\ell} \alpha_{\ell} \sum_{i,j} (W_{\ell})_{i,j} S_{i,j} \;+\; \beta \sum_{\ell} \alpha_{\ell} \log \alpha_{\ell} && \text{(OPT5)}\\
\text{with respect to}\quad & \alpha \in \mathbb{R}^{m}\\
\text{subject to}\quad & \sum_{\ell} \alpha_{\ell} = 1,\quad \alpha_{\ell} \ge 0 \ \text{for all}\ \ell.
\end{aligned}$$
Using the optimality conditions, we derived a closed-form solution for α_ℓ for each ℓ:
$$\alpha_{\ell} = \frac{\exp\big(\sum_{i,j} (W_{\ell})_{i,j} S_{i,j} / \beta\big)}{\sum_{\ell'} \exp\big(\sum_{i,j} (W_{\ell'})_{i,j} S_{i,j} / \beta\big)}.$$
Details are provided in Appendix 7.2.
(5) Termination.
The alternating optimization terminates when all three variables S, F, and α converge. Even
though alternating optimization techniques are widely-used heuristic approaches, the parameters
converged in approximately 20 iterations in the applications we have considered.
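To make steps (1) through (5) concrete, here is a compact NumPy sketch of the whole alternating scheme. It is illustrative only: the paper's exact KKT-based S-update is in Appendix 7.1 (not reproduced here); instead we use the equivalent observation that each row subproblem of (OPT3) is a Euclidean projection of b_i/(2γ) onto the probability simplex, where b_i is the i-th row of Σ_ℓ α_ℓ W_ℓ + λFF⊤. The F-step takes the top-C eigenvectors of the symmetrized S, and the α-step is the softmax closed form of (OPT5). All function names and default parameters are ours.
```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {s : s >= 0, sum(s) = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def denoise(Ws, C, lam=1.0, gamma=1.0, beta=1.0, n_iter=20):
    """Alternating optimization for (OPT2).  Ws: list of m noisy n x n networks."""
    m = len(Ws)
    alpha = np.full(m, 1.0 / m)                        # step (1): uniform weights
    S = sum(a * W for a, W in zip(alpha, Ws))          # step (1): weighted average
    for _ in range(n_iter):
        # F-step (OPT4): top-C eigenvectors of the symmetrized S
        _, vecs = np.linalg.eigh((S + S.T) / 2.0)
        F = vecs[:, -C:]
        # S-step (OPT3): row-wise projection of B / (2*gamma) onto the simplex
        B = sum(a * W for a, W in zip(alpha, Ws)) + lam * (F @ F.T)
        S = np.vstack([project_simplex(row / (2.0 * gamma)) for row in B])
        # alpha-step (OPT5): closed-form softmax over agreement scores <W_l, S>
        scores = np.array([np.sum(W * S) for W in Ws]) / beta
        scores -= scores.max()                         # numerical stability
        alpha = np.exp(scores) / np.exp(scores).sum()
    return (S + S.T) / 2.0, F, alpha                   # symmetrize, as in Section 2.1
```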
2.3 Learning from multi-resolution networks and high-confidence links
Now suppose in addition to a multi-resolution network, we are given noiseless (or highly confident)
information about the presence of certain links in the network. More formally, we are given a set P,
such that if a link (i, j) ∈ P, then we know that it is almost surely a true positive link. If (i, j) ∉ P,
then we only know that this link was unobserved, and have no information whether or not it is present
or absent in the true denoised network.
In the applications we consider, there is typically a subset of nodes for which all of their incident
links are unobserved. So if we consider a binary adjacency matrix on these nodes based on P, a
number of columns (or rows) will indeed have all missing values. Therefore, the only information we
have about these nodes are their incident noisy links in the multi-resolution network.
The formulation in (OPT2) can easily incorporate the positive set P. For each node i, we denote
P_i = {j : (i, j) ∈ P} and formulate an extended optimization problem
$$\begin{aligned}
\text{minimize}\quad & -f(S) - \operatorname{tr}\Big(\big(\textstyle\sum_{\ell} \alpha_{\ell} W_{\ell}\big)^{\top} S\Big) + \lambda\, L(S,F) + \gamma\,\|S\|_F^2 + \beta\, P(\alpha) && \text{(OPT6)}\\
\text{with respect to}\quad & S \in \mathbb{R}^{n\times n},\; F \in \mathbb{R}^{n\times C},\; \alpha \in \mathbb{R}^{m}\\
\text{subject to}\quad & F^{\top}F = I_C,\quad \sum_{j} S_{ij} = 1,\quad \sum_{\ell} \alpha_{\ell} = 1,\quad S_{ij} \ge 0 \ \text{for all}\ (i,j),\quad \alpha_{\ell} \ge 0 \ \text{for all}\ \ell,\\
\text{where}\quad & f(S) = \sum_{i=1}^{n} \frac{1}{|P_i|} \sum_{j \in P_i} S_{ij}, \quad\text{and}\quad L(S,F) \text{ and } P(\alpha) \text{ follow from (OPT2)}.
\end{aligned}$$
Notice that when applying alternating optimization to solve this problem, we can simply use the
same approach used to solve (OPT2). The only change needed is to include f (S) in the objective of
(OPT3) in order to update S.
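In the projection-based sketch above, this change amounts to adding the f(S) contribution to the row scores before projecting; a minimal illustration (assuming, per the reconstruction of (OPT6) above, a unit coefficient on f(S)):
```python
# pos maps node i -> set of high-confidence partners P_i; B is the score matrix
# (sum_l alpha_l W_l + lam * F F^T) from the S-step of the earlier sketch.
def add_capture_c_scores(B, pos):
    B = B.copy()
    for i, partners in pos.items():
        for j in partners:
            B[i, j] += 1.0 / len(partners)   # gradient of f(S) w.r.t. S_ij
    return B
```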
3 Implementation Details
3.1 How to Determine C
We provide an intuitive way to determine the number of communities, C, in our methods. The
optimal value of C should be close to the true number of communities in the network. One possible
approach to discover the number of groups is to analyze the eigenvalues of the weight matrix and search for a drop in the magnitude of the eigenvalue gaps. However, this approach is very sensitive to the noise in the weight matrix and can therefore be unstable in noisy networks. We use an alternative
approach by analyzing eigenvectors of the network, similar to [27]. Consider a network with C
disjoint communities. It is well known that the eigenvectors of the network Laplacian form a full
basis spanning the network subspace. Although presence of noise may cause this ideal case to fail, it
can still shed light on community membership. Given a specific number of communities C, we aim
to find an indication matrix Z(R) = XR, where X ∈ R^{n×C} is the matrix of the top eigenvectors of the network Laplacian, and R ∈ R^{C×C} is a rotation matrix. Denote [M(R)]_i = max_j [Z(R)]_{i,j}. We
search for R such that it minimizes the following cost function
$$J(R) = \sum_{i,j} \frac{[Z(R)]_{i,j}^2}{[M(R)]_i^2}$$
Minimizing this cost function over all possible rotations will provide the best alignment with the
canonical coordinate system. This is done using the gradient descent scheme [27]. Instead of taking
the number of communities to be the one providing the minimal cost as in [27], we seek the number
of communities that result in the largest drop in the value of J(R).
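A direct transcription of this cost (our NumPy sketch; the absolute value in the row maximum is our guard against eigenvector sign flips, which the text leaves implicit):
```python
import numpy as np

def rotation_cost(X, R):
    """J(R) from Section 3.1.  X: n x C top eigenvectors of the Laplacian;
    R: C x C rotation.  Lower cost means rows of XR align with canonical axes."""
    Z = X @ R
    M = np.max(np.abs(Z), axis=1, keepdims=True)   # [M(R)]_i
    return np.sum((Z / M) ** 2)
```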
3.2 Convergence Criterion
The proposed method is an iterative algorithm. It is important to determine a convergence criterion
to stop the iterations. Our method adopts a well-defined approach to decide when convergence has been reached. Similar to spectral clustering [3], we use the eigengap to measure the convergence of our
method. Eigengap is defined as follows:
$$\mathrm{eigengap}(i) = \lambda_{i+1} - \lambda_{i}, \qquad (1)$$
where λ_i is the i-th eigenvalue of the matrix S and we sort the eigenvalues in ascending order (λ₁ ≤ λ₂ ≤ ... ≤ λ_n). For C clusters, we use eigengap(C) = λ_{C+1} − λ_C.
The intuition behind the eigengap is that, if a similarity matrix includes C perfectly strong clusters, then
eigengap(C) should be near zero (which was proved in [28]). Due to the low-rank constraint in our
optimization framework, we seek a small value of eigengap(C) for a good optimal value. We can set
a stopping criterion for our method using eigengap(C) < T for a small threshold, T . However, due
to the noise, reaching a small threshold cannot be guaranteed; therefore, a practical stopping criterion adopted by our method is when eigengap(C) has stopped decreasing. In our experiments we have observed that eigengap(C) usually decreases for around 10 iterations and then remains stable.
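The statistic itself is a one-liner on the (symmetrized) similarity matrix; a sketch of ours:
```python
import numpy as np

def eigengap(S, C):
    """lambda_{C+1} - lambda_C, eigenvalues of the symmetrized S in ascending order."""
    lam = np.sort(np.linalg.eigvalsh((S + S.T) / 2.0))
    return lam[C] - lam[C - 1]   # 0-indexed: lam[C-1] is the C-th eigenvalue
```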
4 Experiments
We apply the framework presented in (OPT6) to Hi-C and Capture-C data. As explained, detecting
communities in these data has important scientific ramifications. Our denoising strategy can be part of
the pipeline for discovering these communities. We evaluated our methods on real data and checked
their robustness by adding additional noise and measuring performance.
For the real data, we started with a ground truth of domains previously identified in the GM12878
cell line chromosomes 14 and 21 [10], filtered to only contain domains that do not have ambiguous
boundaries or that overlap due to noise in the ground truth, and stitched these together. We ran our
algorithm using data at 8 different resolutions (5 kb,10kb, 25kb, 50kb, 100kb, 250kb, 500kb, 1Mb).
A heat map of the highest and lowest resolution of the simulated data for chromosome 21 can be seen
in Figure 1.
Figure 2a shows a heat map of the denoised version of chromosome 14 using (OPT6). Below the
heat map we show the ground truth blocks. 1) The baseline (Louvain algorithm [29]) was set to
the clusters determined from the highest-resolution Hi-C map (purple). 2) The clustering improves after denoising this map using (OPT1) (orange). 3) Pooling data through the use of multi-resolution maps and (OPT2) further increases the size of the clusters. Finally, 4) the clusters improve further when using the high-confidence set together with multi-resolution maps and (OPT6) (blue).
As mentioned earlier, in order to determine ground truth, we have chosen large disjoint blocks with
low levels of noise. To test our algorithm in the presence of noise, we added distance dependent
random noise to the network. We evaluated our performance by measuring the normalized mutual
information (NMI) between the ground truth and the clusters resulting from the noisy data (Figure 2b) [30]. We see that while the NMI of the baseline falls rapidly, the performance of our denoising
algorithm stays relatively constant after a rapid (but significantly smaller) drop. Figure 2c shows the
weights assigned to each resolution as noise is added. We see that the weight assigned to the highest
resolution has a steep drop with a small amount of noise. This could partially explain the drop in the
performance of baseline (which is computed from the high resolution data) in Figure 2b.
To validate the obtained denoised network by our method, we check 2 features of domains: 1)
increased covariation in genomic signals such as histone marks inside domains compared to across
domains and 2) the binding of the protein CTCF at the boundaries of the domains (see Appendix 7.4).
We quantify covariation in genomic signals by focusing on 3 histone marks (H3K4ME1, H3K4ME3
and H3K27AC), and computing the correlation of these across all pairs of genomic regions, based
on measurements of these histone marks from 75 individuals [33]. We then compare the ratio between covariation within domains and covariation between domains. A higher ratio indicates better
coherence of biological signals within domains while larger dispersions of signals between domains,
therefore implying better quality of the identified domains. Second, we inspect another key biological
phenomena that is binding strength of transcription factor CTCF on boundary regions [31, 35]. It
has been observed that, CTCF usually binds with boundary of domains in HiC data. This serves as
another way to validate the correctness of identified domain boundaries, by checking the fraction
of domain boundaries that contain CTCF. Figure 2d shows that our method produces a higher ratio
of specific histone marks and CTCF binding than the baseline, indicating better ability to detect
biologically meaningful boundaries.
In each experiment, we selected the number of communities C for clustering based on the implementation details in Section 3. The best C is highlighted in Figure 3a-b. The optimal C coincided with
the true number of clusters, indicating that the selection criteria was well-suited for the two datasets.
Furthermore, as shown in Figure 3c-d the alternating optimization in (OPT6) converged within 20
iterations according to the criteria in Section 3 where the eigen-gaps stabilized quickly.
Figure 1: a) Heat map of data simulated from chromosome 21. The subclusters were chosen to be clearly
distinguishable from each other (in order to have clear boundaries to determine the ground truth for the boundaries
of the blocks). The blocks were subsequently stitched to each other. b) Simulated low-resolution data. c) Capture-C network: these positions are treated as low-noise data. d) Denoised network: (OPT6) was used to denoise the
network using all 8 resolutions in addition to the Capture-C data in (c).
5 Conclusions and Future Work
In this paper we proposed an unsupervised optimization framework to learn meaningful structures
in a noisy network. We leverage multi-resolution networks to improve the robustness to noise
by automatically learning weights for different resolutions. In addition, our framework naturally
extends to incorporate partial labels. We demonstrate the performance of our approach using genomic
interaction networks generated by noisy Hi-C data. In particular, we show how incorporating
multiple Hi-C resolutions enhances the effectiveness in denoising the interaction networks. Given
partial information from Capture-C data, we further denoise the network and discover more accurate
community structures.
In the future, it would be important to extend our method to whole genome Hi-C data to get a global
view of the 3D structure of the genome. This will involve clever binning or partition of the genome to
[Figure 2, panel (d), Biology Validation: bar chart comparing the Baseline and Our Method on H3K4ME1, H3K4ME3, H3K27AC, and CTCF Binding.]
Figure 2: a) Denoised Network: heatmap of denoised network using with Hi-C and Capture-C according to
(OPT6). The tracks below the heatmap indicate the division of the classified communities with respect to the
ground truth. The use of multi-resolution Hi-C and capture-C achieves the best concordance with the ground
truth. b) Clustering performance: The performance of the baseline degrades rapidly with the introduction of
noise. Our method with various inputs perform significantly better than the baseline suggesting that denoising
using our framework can significantly improve the task of clustering c) Weight Distribution: The weights (?i )
assigned to each resolution from the optimization in (OPT2). The noise increases the performance of the highest
resolution matrix decreases rapidly at first. In response, the method rapidly decreases the weight for this matrix.
d) Ratio between covariates: we used three specific histone marks and the CTCF binding sites as indicators of
the accuracy in detecting the boundaries.
Figure 3: a) - b) The gradient of J(R) over the number of communities C. The best C selected is based on the
value that minimizes the gradient of J(R) (circled in red). c) - d) The eigen-gaps over the number of iterations
in the optimization framework. The eigengaps stabilize at the same value within 20 iterations, indicating the
optimization problem converges in only a few iterations.
reduce the problem size to a more local level where clustering methods can reveal meaningful local
structures. In addition, our current framework is very modular. Even though we demonstrate our
approach with k-means clustering as a module, other appropriate clustering or community detection
algorithms can be substituted for this module for whole genome Hi-C analysis. Finally, it would be
interesting to extend our approach to a semi-supervised setting where a subset of confident links are
used to train a classifier for the missing links in Capture-C data.
6 Acknowledgments
We would like to thank Nasa Sinnott-Armstrong for initial advice on this project. JZ acknowledges support from a Stanford Graduate Fellowship. AP was partially supported by the Stanford Genome
Training Program: NIH 5T32HG000044-17. AK was supported by the Alfred Sloan Foundation
Fellowship. OU is supported by the HHMI International Students Research Fellowship. BW and SB
were supported by NIH Sidow grant (1R01CA183904-01A1).
References
[1] J. Yang, J. McAuley, and J. Leskovec Community detection in networks with node attributes IEEE International Conference on, pp.
1151-1156. (2013).
[2] E. J. Candes and Y. Plan Matrix Completion With Noise. Proceedings of the IEEE 98, 925-936. (2010)
[3] B. Wang, A. M. Mezlini, F. Demir, M. Fiume, Z. Tu, M. Brudno, B. Haibe-Kains and A. Goldenberg Similarity network fusion for
aggregating data types on a genomic scale. Nat. Methods, 11, 333-337. (2014).
[4] J. Dekker. Gene regulation in the third dimension. Science, 319(5871):1793-4. (2008).
[5] L.A. Lettice, et al. A long-range Shh enhancer regulates expression in the developing limb and fin and is associated with preaxial
polydactyly. Hum. Mol. Genet. , 12, 1725-1735. (2003).
[6] Y. Qin, L.K. Kong, C. Poirier, C. Truong, P.A. Overbeek and C.E. Bishop Long-range activation of Sox9 in Odd Sex (Ods) mice. Hum.
Mol. Genet., 13, 1213-1218. (2004).
[7] W. de Laat, D. Duboule Topology of mammalian developmental enhancers and their regulatory landscapes. Nature, 502, pp. 499-506
(2013).
[8] J. R. Dixon, S. Selvaraj, F. Yue, A. Kim, Y. Li, Y. Shen, M. Hu, J. S. Liu, B. Ren. Topological domains in mammalian genomes identified
by analysis of chromatin interactions. Nature, 485:376380 (2012).
[9] E. Lieberman-Aiden et al. Comprehensive Mapping of Long Range Interactions Reveals Folding Principles of the Human Genome.
Science, 326.5950: 289293, (2009).
[10] S. S. P. Rao et al. A 3D Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin Looping. Cell, 159(7):
1665-1680, (2014).
[11] Shaw P.J. Mapping chromatin conformation. F1000 Biol Rep. 2 doi: 10.3410/B2-18. (2010)
[12] J. Dekker, K. Rippe, M. Dekker and N. Kleckner, Capturing Chromosome Conformation. Science 295, 1306 (2002).
[13] Z. Zhao et al., Circular chromosome conformation capture (4C) uncovers extensive networks of epigenetically regulated intra- and
interchromosomal interactions Nat. Genet. 38, 1341 (2006).
[14] M. Simonis et al. Nuclear organization of active and inactive chromatin domains uncovered by chromosome conformation capture-onchip (4C) Nat. Genet. 38, 1348 (2006).
[15] J. Dostie et al. Chromosome Conformation Capture Carbon Copy (5C): A massively parallel solution for mapping interactions between
genomic elements Genome Res. 16, 1299 (2006).
[16] J. Chiquet,Y. Grandvalet, and C. Ambroise. Inferring multiple graphical structures Statistics and Computing (2011)
[17] E.M. Marcotte, M. Pellegrini, H.-L. Ng, D.W. Rice, T.O. Yeates, and D. Eisenberg. Detecting protein function and protein-protein
interactions from genome sequences. Science, 285, pp 751-753, (1999).
[18] J. Chen and B. Yuan. Detecting functional modules in the yeast protein protein interaction network. Bioinformatics, (2006).
[19] J. K. Pritchard, M. Stephens, and P. Donnelly. Inference of Population Structure Using Multilocus Genotype Data. Genetics, (2000).
[20] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. PANS, 99(12):7821-7826, (2002).
[21] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22:888-905, (1997).
[22] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing,
7(1):76-80, (2003).
[23] I. Cabreros, E. Abbe, and A. Tsirigos Detecting Community Structures in Hi-C Genomic Data. ArXiv e-prints 1509.05121 (2015).
[24] S. S.P. Rao, et al. A 3D Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin Looping. Cell 159, 2014.
[25] Wang, Bo, and Tu, Zhuowen Sparse subspace denoising for image manifolds. Computer Vision and Pattern Recognition (CVPR), 2013.
[26] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. TPAMI, (2013).
[27] L. Zelnik-Manor and P. Perona. In L. K. Saul, Y. Weiss, and L. Bottou, Self-tuning spectral clustering. NIPS, pages 1601-1608. (2005).
[28] U. Von Luxburg. A tutorial on spectral clustering. Statistics and computing 17.4 : 395-416 (2007).
[29] V. D. Blondel et al. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment 2008.10 :
P10008 (2008).
[30] S. Alexander, and J. Ghosh. Cluster ensembles?a knowledge reuse framework for combining multiple partitions. JMLR, (2003).
[31] ENCODE Project Consortium An integrated encyclopedia of DNA elements in the human genome. Nature, (2012).
[32] Wang, B., Jiang, J., Wang, W., Zhou, Z. H., and Tu, Z. Unsupervised metric fusion by cross diffusion. Computer Vision and Pattern
Recognition, (2012).
[33] Grubert, Fabian, et al. Genetic control of chromatin states in humans involves local and distal chromosomal interactions. Cell 162.5
(2015): 1051-1065.
[34] Ernst, Jason, and Manolis Kellis. ChromHMM: automating chromatin-state discovery and characterization. Nature methods 9.3 (2012):
215-216.
[35] Mifsud, Borbala, et al. Mapping long-range promoter contacts in human cells with high-resolution capture Hi-C. Nature genetics 47.6
(2015): 598-606.
5,849 | 6,292 | Linear Contextual Bandits with Knapsacks
Shipra Agrawal*
Nikhil R. Devanur†
Abstract
We consider the linear contextual bandit problem with resource consumption, in
addition to reward generation. In each round, the outcome of pulling an arm is
a reward as well as a vector of resource consumptions. The expected values of
these outcomes depend linearly on the context of that arm. The budget/capacity
constraints require that the total consumption doesn?t exceed the budget for each
resource. The objective is once again to maximize the total reward. This problem
turns out to be a common generalization of classic linear contextual bandits (linContextual) [8, 11, 1], bandits with knapsacks (BwK) [3, 9], and the online stochastic
packing problem (OSPP) [4, 14]. We present algorithms with near-optimal regret
bounds for this problem. Our bounds compare favorably to results on the unstructured version of the problem [5, 10] where the relation between the contexts and
the outcomes could be arbitrary, but the algorithm only competes against a fixed
set of policies accessible through an optimization oracle. We combine techniques
from the work on linContextual, BwK and OSPP in a nontrivial manner while also
tackling new difficulties that are not present in any of these special cases.
1 Introduction
In the contextual bandit problem [8, 2], the decision maker observes a sequence of contexts (or
features). In every round she needs to pull one out of K arms, after observing the context for that
round. The outcome of pulling an arm may be used along with the contexts to decide future arms.
Contextual bandit problems have found many useful applications such as online recommendation
systems, online advertising, and clinical trials, where the decision in every round needs to be
customized to the features of the user being served. The linear contextual bandit problem [1, 8, 11]
is a special case of the contextual bandit problem, where the outcome is linear in the feature vector
encoding the context. As pointed by [2], contextual bandit problems represent a natural half-way
point between supervised learning and reinforcement learning: the use of features to encode contexts
and the models for the relation between these feature vectors and the outcome are often inherited from
supervised learning, while managing the exploration-exploitation tradeoff is necessary to ensure good
performance in reinforcement learning. The linear contextual bandit problem can thus be thought of
as a midway between the linear regression model of supervised learning, and reinforcement learning.
Recently, there has been significant interest in introducing multiple "global constraints" in the
standard bandit setting [9, 3, 10, 5]. Such constraints are crucial for many important real-world
applications. For example, in clinical trials, the treatment plans may be constrained by the total
availability of medical facilities, drugs and other resources. In online advertising, there are budget
constraints that restrict the number of times an ad is shown. Other applications include dynamic
pricing, dynamic procurement, crowdsourcing, etc.; see [9, 3] for many such examples.
In this paper, we consider the linear contextual bandit with knapsacks (henceforth, linCBwK)
problem. In this problem, the context vectors are generated i.i.d. in every round from some unknown
distribution, and on picking an arm, a reward and a consumption vector is observed, which depend
* Columbia University. [email protected].
† Microsoft Research. [email protected].
linearly on the context vector. The aim of the decision maker is to maximize the total reward while
ensuring that the total consumption of every resource remains within a given budget. Below, we give
a more precise definition of this problem. We use the following notational convention throughout:
vectors are denoted by bold face lower case letters, while matrices are denoted by regular face upper
case letters. Other quantities such as sets, scalars, etc. may be of either case, but never bold faced. All
vectors are column vectors, i.e., a vector in n dimensions is treated as an n × 1 matrix. The transpose of matrix A is A⊤.
Definition 1 (linCBwK). There are K "arms", which we identify with the set [K]. The algorithm is initially given as input a budget B ∈ R₊. In every round t, the algorithm first observes context x_t(a) ∈ [0,1]^m for every arm a, and then chooses an arm a_t ∈ [K], and finally observes a reward r_t(a_t) ∈ [0,1] and a d-dimensional consumption vector v_t(a_t) ∈ [0,1]^d. The algorithm has a "no-op" option, which is to pick none of the arms and get 0 reward and 0 consumption. The goal of the algorithm is to pick arms such that the total reward $\sum_{t=1}^{T} r_t(a_t)$ is maximized, while ensuring that the total consumption does not exceed the budget, i.e., $\sum_{t} v_t(a_t) \le B\mathbf{1}$.
We make the following stochastic assumption for context, reward, and consumption vectors. In
every round t, the tuple {x_t(a), r_t(a), v_t(a)}_{a=1}^{K} is generated from an unknown distribution D, independent of everything in previous rounds. Also, there exists an unknown vector θ∗ ∈ [0,1]^m and a matrix W∗ ∈ [0,1]^{m×d} such that for every arm a, given context x_t(a) and history H_{t−1} before time t,
$$\mathbb{E}[r_t(a) \mid x_t(a), H_{t-1}] = \theta_*^{\top} x_t(a), \qquad \mathbb{E}[v_t(a) \mid x_t(a), H_{t-1}] = W_*^{\top} x_t(a). \qquad (1)$$
For succinctness, we will denote the tuple of contexts for the K arms at time t as a matrix X_t ∈ [0,1]^{m×K}, with x_t(a) being the a-th column of this matrix. Similarly, rewards and consumption vectors at time t are represented as the vector r_t ∈ [0,1]^K and the matrix V_t ∈ [0,1]^{d×K} respectively.
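To make the model concrete, the following toy NumPy snippet samples one round from an instance of this linear model. The distributional choices (uniform contexts, Bernoulli outcomes, the 1/m scaling that keeps the means in [0, 1]) are our own illustration, not part of the problem definition:
```python
import numpy as np

rng = np.random.default_rng(0)
m, K, d = 5, 10, 3
theta_star = rng.uniform(0, 1, size=m) / m        # scaled so E[r] stays in [0, 1]
W_star = rng.uniform(0, 1, size=(m, d)) / m

def sample_round():
    X = rng.uniform(0, 1, size=(m, K))            # context x_t(a) for each arm a
    r_mean = theta_star @ X                       # E[r_t(a)] = theta*^T x_t(a)
    v_mean = W_star.T @ X                         # E[v_t(a)] = W*^T x_t(a), d x K
    r = rng.binomial(1, r_mean).astype(float)     # bounded noisy realizations
    v = rng.binomial(1, v_mean).astype(float)
    return X, r, v
```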
As we discuss later in the text, the assumption in equation (1) forms the primary distinction between
our linear contextual bandit setting and the general contextual bandit setting considered in [5].
Exploiting this linearity assumption will allow us to generate regret bounds which do not depend
on the number of arms K, rendering them especially useful when the number of arms is large.
Some examples of this include recommendation systems with large number of products (e.g., retail
products, travel packages, ad creatives, sponsored facebook posts). Another advantage over using the
general contextual bandit setting of [5] is that we don't need oracle access to a certain optimization
problem, which in this case is required to solve an NP-Hard problem. (See Section 1.1 for a more
detailed discussion.)
We compare the performance of an algorithm to that of an optimal adaptive policy that knows the
distribution D and the parameters (θ∗, W∗), and can take into account the history up to that point,
as well as the current context, to decide (possibly with randomization) which arm to pull at time t.
However, it is easier to work with an upper bound on this, which is the optimal expected reward of a
static policy that is required to satisfy the constraints only in expectation. This technique has been
used in several related problems and is standard by now [14, 9].
Definition 2 (Optimal Static Policy). A context-dependent non-adaptive policy π is a mapping from the context space [0,1]^{m×K} to Δ = {p ∈ [0,1]^K : ‖p‖₁ ≤ 1}, where π(X)_i denotes the probability of playing arm i when the context is X, and 1 − Σ_{i=1}^{K} π(X)_i is the probability of no-op. Define r(π) and v(π) to be the expected reward and consumption vector of policy π, respectively, i.e.
$$r(\pi) := \mathbb{E}_{(X,r,V)\sim D}\big[r^{\top}\pi(X)\big] = \mathbb{E}_{X\sim D}\big[\theta_*^{\top} X \pi(X)\big]. \qquad (2)$$
$$v(\pi) := \mathbb{E}_{(X,r,V)\sim D}\big[V \pi(X)\big] = \mathbb{E}_{X\sim D}\big[W_*^{\top} X \pi(X)\big]. \qquad (3)$$
$$\text{Let } \pi^* := \arg\max_{\pi}\; T\,r(\pi) \ \text{ such that } \ T\,v(\pi) \le B\mathbf{1} \qquad (4)$$
be the optimal static policy. Note that since no-op is allowed, a feasible policy always exists. We denote the value of this optimal static policy by OPT := T r(π*).
The following lemma proves that OPT upper bounds the value of an optimal adaptive policy. Proof is
in Appendix B in the supplement.
Lemma 1. Let OPT′ denote the value of an optimal adaptive policy that knows the distribution D and parameters θ∗, W∗. Then OPT′ ≤ OPT.
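On a finite sample of contexts with known expected rewards and consumptions, the benchmark of Definition 2 reduces to an ordinary linear program (one distribution per sampled context, with the no-op absorbing the remaining mass). The SciPy sketch below solves this empirical analogue; it is only meant to make the benchmark concrete. The algorithms in the paper never solve this LP, since D, θ∗ and W∗ are unknown, and all names here are ours.
```python
import numpy as np
from scipy.optimize import linprog

def empirical_static_policy(rs, Vs, B, T):
    """rs: list of N reward vectors (length K); Vs: list of N consumption
    matrices (d x K).  Maximizes sum_k r_k . p_k subject to the sample-average
    resource constraint (T/N) sum_k V_k p_k <= B*1 and sum_a p_{k,a} <= 1."""
    N, K = len(rs), len(rs[0])
    d = Vs[0].shape[0]
    c = -np.concatenate(rs)                        # linprog minimizes
    A_res = (T / N) * np.hstack(Vs)                # d x (N*K) resource rows
    A_simplex = np.kron(np.eye(N), np.ones((1, K)))  # per-context mass <= 1
    A_ub = np.vstack([A_res, A_simplex])
    b_ub = np.concatenate([B * np.ones(d), np.ones(N)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x.reshape(N, K)                     # one distribution per context
```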
Definition 3 (Regret). Let at be the arm played at time t by the algorithm. Then, regret is defined as
$$\mathrm{regret}(T) := \mathrm{OPT} - \sum_{t=1}^{T} r_t(a_t).$$
1.1 Main results
Our main result is an algorithm with near-optimal regret bound for linCBwK.
Theorem 1. There is an algorithm for linCBwK such that if B > m^{1/2} T^{3/4}, then with probability at least 1 − δ,
$$\mathrm{regret}(T) = O\left(\Big(\frac{\mathrm{OPT}}{B} + 1\Big)\, m \sqrt{T \log(dT/\delta)}\, \log(T)\right).$$
Relation to general contextual bandits. There have been recent papers [5, 10] that solve problems similar to linCBwK but for general contextual bandits. In these papers the relation between
contexts and outcome vectors is arbitrary and the algorithms compete with an arbitrary fixed set
of context-dependent policies Π accessible via an optimization oracle, with regret bounds of the form $O\big((\frac{\mathrm{OPT}}{B} + 1)\sqrt{KT \log(dT|\Pi|/\delta)}\big)$. These approaches could potentially be applied to the linear setting using a set Π of linear context-dependent policies. Comparing their bounds with ours, in our results, essentially a $\sqrt{K \log |\Pi|}$ factor is replaced by a factor of m. Most importantly, we have no
dependence on K,3 which enables us to consider problems with large action spaces.
Further, suppose that we want to use their result with the set of linear policies, i.e., policies of the
form, for some fixed θ ∈ R^m,
$$\arg\max_{a \in [K]} \; x_t(a)^{\top} \theta.$$
Then, their algorithms would require access to an "Arg-Max Oracle" that can find the best such policy
(maximizing total reward) for a given set of contexts and rewards (no resource consumption). In
fact, by a reduction from the problem of learning halfspaces with noise [16], we can show that the
optimization problem underlying such an "Arg-Max Oracle" is NP-Hard, making such an
approach computationally expensive. The proof of this is in Appendix C in the supplement.
The only downside to our results is that we need the budget B to be Ω(m^{1/2} T^{3/4}). Getting similar bounds for budgets as small as B = Θ(m√T) is an interesting open problem. (This also indicates
that this is indeed a harder problem than all the special cases.)
Near-optimality of regret bounds. In [12], it was shown that for the linear contextual bandits problem, no online algorithm can achieve a regret bound better than Ω(m√T). In fact, they prove
this lower bound for linear contextual bandits with static contexts. Since that problem is a special
case of the linCBwK problem with d = 1, this shows that the dependence on m and T in the above
regret bound is optimal up to log factors. For general contextual bandits with resource constraints, the
bounds of [5, 10] are near optimal.
Relation to BwK [3] and OSPP [4]. It is easy to see that the linCBwK problem is a generalization
of the linear contextual bandits problem [1, 8, 11]. There, the outcome is scalar and the goal is
to simply maximize the sum of these. Remarkably, the linCBwK problem also turns out to be a
common generalization of the bandits with knapsacks (BwK) problem considered in [9, 3], and the
online stochastic packing problem (OSPP) studied by [13, 6, 15, 14, 4]. In both BwK and OSPP, the
outcome of every round t is a reward rt and a vector vt and the goal of the algorithm is to maximize
$\sum_{t=1}^{T} r_t$ while ensuring that $\sum_{t=1}^{T} v_t \le B\mathbf{1}$. The problems differ in how these rewards and vectors
are picked. In the OSPP problem, in every round t, the algorithm may pick any reward/vector pair
from a given set At of d + 1-dimensional vectors. The set At is drawn i.i.d. from an unknown
distribution over sets of vectors. This corresponds to the special case of linCBwK, where m = d + 1
and the context xt (a) itself is equal to (rt (a), vt (a)). In the BwK problem, there is a fixed set of
arms, and for each arm there is an unknown distribution over reward/vector pairs. The algorithm picks an arm and a reward/vector pair is drawn from the corresponding distribution for that arm. This
3 Similar to the regret bounds for linear contextual bandits [8, 1, 11].
corresponds to the special case of linCBwK, where m = K and the context Xt = I, the identity
matrix, for all t.
We use techniques from all three special cases: our algorithms follow the primal-dual paradigm
and use an online learning algorithm to search the dual space, as was done in [3]. In order to deal
with linear contexts, we use techniques from [1, 8, 11] to estimate the weight matrix W∗, and define "optimistic estimates" of W∗. We also use the technique of combining the objective and the constraints using a certain tradeoff parameter, which was introduced in [4]. Further new difficulties arise, such
as in estimating the optimum value from the first few rounds, a task that follows from standard
techniques in each of the special cases but is very challenging here. We develop a new way of
exploration that uses the linear structure, so that one can evaluate all possible choices that could
have led to an optimum solution on the historic sample. This technique might be of independent
interest in estimating optimum values. One can see that the problem is indeed more than the sum of
? 1/2 T 3/4 ),
its parts, from the fact that we get the optimal bound for linCBwK only when B ? ?(m
B
(but
is
meaningful
only for
unlike either
special
case
for
which
the
optimal
bound
holds
for
all
?
?
B = ?(m
T )).
The approach in [3] (for BwK) extends to the case of "static" contexts,⁴ where each arm has a context
that doesn't change over time. The OSPP of [4] is not a special case of linCBwK with static contexts;
this is one indication of the additional difficulty of dynamic over static contexts.
Other related work. Recently, [17] showed an Õ(√T) regret in the linear contextual setting with
a single budget constraint, when costs depend only on contexts and not arms.

Due to space constraints, we have moved many proofs from the main part of the paper to the
supplement.
2 Preliminaries

2.1 Confidence Ellipsoid
Consider a stochastic process which in each round t generates a pair of observations (r_t, y_t), such
that r_t is an unknown linear function of y_t plus some 0-mean bounded noise, i.e., r_t = μ_∗^⊤ y_t + η_t,
where y_t, μ_∗ ∈ R^m, |η_t| ≤ 2R, and E[η_t | y_1, r_1, . . . , y_{t−1}, r_{t−1}, y_t] = 0.

At any time t, a high confidence estimate of the unknown vector μ_∗ can be obtained by building
a "confidence ellipsoid" around the ℓ2-regularized least-squares estimate μ̂_t constructed from the
observations made so far. This technique is common in prior work on linear contextual bandits (e.g.,
in [8, 11, 1]). For any regularization parameter λ > 0, let

    M_t := λI + Σ_{i=1}^{t−1} y_i y_i^⊤,   and   μ̂_t := M_t^{−1} Σ_{i=1}^{t−1} y_i r_i.

The following result from [1] shows that μ_∗ lies with high probability in an ellipsoid with center μ̂_t.
For any positive semi-definite (PSD) matrix M, define the M-norm as ||μ||_M := √(μ^⊤ M μ). The
confidence ellipsoid at time t is defined as

    C_t := { μ ∈ R^m : ||μ − μ̂_t||_{M_t} ≤ R √(m log((1 + tm/λ)/δ)) + √(λm) }.

Lemma 2 (Theorem 2 of [1]). If for all t, ||μ_∗||_2 ≤ √m and ||y_t||_2 ≤ √m, then with prob. 1 − δ,
for all t, μ_∗ ∈ C_t.
Another useful observation about this construction is stated below. It first appeared as Lemma 11 of
[8], and was also proved as Lemma 3 in [11].

Lemma 3 (Lemma 11 of [8]). Σ_{t=1}^T ||y_t||_{M_t^{−1}} ≤ √(mT log(T)).
As a corollary of the above two lemmas, we obtain a bound on the total error in the estimate provided
by "any point" from the confidence ellipsoid. (Proof is in Appendix D in the supplement.)

⁴It was incorrectly claimed in [3] that the approach can be extended to dynamic contexts without much
modification.

Corollary 1. For t = 1, . . . , T, let μ̃_t ∈ C_t be a point in the confidence ellipsoid, with λ = 1 and
2R = 1. Then, with probability 1 − δ,

    Σ_{t=1}^T |μ̃_t^⊤ y_t − μ_∗^⊤ y_t| ≤ 2m √(T log((1 + Tm)/δ)) log(T).
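As a concrete illustration of this construction, the following minimal NumPy sketch maintains M_t
and the regularized least-squares center incrementally, using the displayed radius; the class and all
names are our own illustration, not code from the paper:

import numpy as np

class ConfidenceEllipsoid:
    """Regularized least-squares estimate with an ellipsoidal confidence set.

    Maintains M_t = lam*I + sum_i y_i y_i^T and mu_hat_t = M_t^{-1} sum_i y_i r_i.
    """

    def __init__(self, m, lam=1.0, R=0.5, delta=0.05):
        self.m, self.lam, self.R, self.delta = m, lam, R, delta
        self.M = lam * np.eye(m)        # M_t
        self.b = np.zeros(m)            # running sum of y_i * r_i
        self.t = 1

    def update(self, y, r):
        self.M += np.outer(y, y)
        self.b += r * y
        self.t += 1

    def center(self):
        return np.linalg.solve(self.M, self.b)   # mu_hat_t

    def radius(self):
        # radius from the displayed definition of C_t:
        # R*sqrt(m*log((1 + t*m/lam)/delta)) + sqrt(lam*m)
        t, m, lam = self.t, self.m, self.lam
        return (self.R * np.sqrt(m * np.log((1 + t * m / lam) / self.delta))
                + np.sqrt(lam * m))

    def contains(self, mu):
        d = mu - self.center()
        return np.sqrt(d @ self.M @ d) <= self.radius()   # ||.||_{M_t} test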
2.2 Online Learning

Consider a T-round game played between an online learner and an adversary, where in round
t, the learner chooses a θ_t ∈ Δ := {θ : ||θ||_1 ≤ 1, θ ≥ 0}, and then observes a linear function
g_t : Δ → [−1, 1] picked by the adversary. The learner's choice θ_t may only depend on the learner's
and adversary's choices in previous rounds. The goal of the learner is to minimize regret, defined as
the difference between the learner's objective value and the value of the best single choice in hindsight:

    R(T) := max_{θ∈Δ} Σ_{t=1}^T g_t(θ) − Σ_{t=1}^T g_t(θ_t).
The multiplicative weight update (MWU) algorithm (generalization by [7]) is a fast and efficient
online learning algorithm for this problem. Let g_{t,j} := g_t(1_j). Then, given a parameter ε > 0, in
round t + 1, the choice of this algorithm takes the following form:

    θ_{t+1,j} = w_{t,j} / (1 + Σ_j w_{t,j}),   where   w_{t,j} = w_{t−1,j}(1 + ε)^{g_{t,j}} if g_{t,j} > 0,
                                                      w_{t,j} = w_{t−1,j}(1 − ε)^{−g_{t,j}} if g_{t,j} ≤ 0,    (5)

with initialization w_{0,j} = 1, for all j.

Lemma 4 ([7]). For any 0 < ε ≤ 1/2, the MWU algorithm provides the following regret bound for the
online learning problem described above:

    R(T) ≤ εT + log(d+1)/ε.

In particular, for ε = √(log(d+1)/T), we have R(T) ≤ 2√(log(d+1) T).

For the rest of the paper, we refer to the MWU algorithm with ε = √(log(d+1)/T) as the online learning
(OL) algorithm, and the update in (5) as the OL update at time t + 1.
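A minimal sketch of one OL update per (5), assuming gains in [−1, 1]; the function and the toy loop
are our own illustration:

import numpy as np

def mwu_update(w, g, eps):
    """One multiplicative-weight update, mirroring equation (5).

    w   : current weights w_{t-1,j} (nonnegative vector)
    g   : observed gains g_{t,j} in [-1, 1]
    eps : learning-rate parameter, 0 < eps <= 1/2
    Returns (theta_{t+1}, w_t).
    """
    w_new = np.where(g > 0, w * (1 + eps) ** g, w * (1 - eps) ** (-g))
    theta = w_new / (1 + w_new.sum())
    return theta, w_new

# toy run with d + 1 = 3 coordinates and random gains
rng = np.random.default_rng(0)
T, dim = 100, 3
eps = np.sqrt(np.log(dim) / T)
w = np.ones(dim)
for _ in range(T):
    g = rng.uniform(-1, 1, size=dim)
    theta, w = mwu_update(w, g, eps)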
3 Algorithm

3.1 Optimistic estimates of unknown parameters
Let a_t denote the arm played by the algorithm at time t. In the beginning of every round, we use the
outcomes and contexts from previous rounds to construct a confidence ellipsoid for μ_∗ and every
column of W_∗. The construction of the confidence ellipsoid for μ_∗ follows directly from the techniques
in Section 2.1 with y_t = x_t(a_t) and r_t being the reward at time t. To construct a confidence ellipsoid
for a column j of W_∗, we use the techniques in Section 2.1 while substituting y_t = x_t(a_t) and
r_t = v_t(a_t)_j for every j.

As in Section 2.1, let M_t := I + Σ_{i=1}^{t−1} x_i(a_i) x_i(a_i)^⊤, and construct the regularized least squares
estimates for μ_∗, W_∗, respectively, as

    μ̂_t := M_t^{−1} Σ_{i=1}^{t−1} x_i(a_i) r_i(a_i),    (6)
    Ŵ_t := M_t^{−1} Σ_{i=1}^{t−1} x_i(a_i) v_i(a_i)^⊤.    (7)
Define the confidence ellipsoid for parameter μ_∗ as

    C_{t,0} := { μ ∈ R^m : ||μ − μ̂_t||_{M_t} ≤ √(m log((d + tmd)/δ)) + √m },

and for every arm a, the optimistic estimate of μ_∗ as:

    μ̃_t(a) := arg max_{μ∈C_{t,0}} x_t(a)^⊤ μ.    (8)

Let w_j denote the j-th column of a matrix W. We define a confidence ellipsoid for each column j as

    C_{t,j} := { w ∈ R^m : ||w − ŵ_{tj}||_{M_t} ≤ √(m log((d + tmd)/δ)) + √m },

and denote by G_t the Cartesian product of all these ellipsoids: G_t := {W ∈ R^{m×d} : w_j ∈ C_{t,j}}.
Note that Lemma 2 implies that W_∗ ∈ G_t with probability 1 − δ. Now, given a vector θ_t ∈ R^d, we
define the optimistic estimate of the weight matrix at time t w.r.t. θ_t, for every arm a ∈ [K], as:

    W̃_t(a) := arg min_{W∈G_t} x_t(a)^⊤ W θ_t.    (9)
Intuitively, for the reward we want an upper confidence bound, and for the consumption we want a
lower confidence bound, as optimistic estimates. This intuition aligns with the above definitions,
where a maximizer was used in the case of reward and a minimizer was used for consumption. The
utility and precise meaning of θ_t will become clearer when we describe the algorithm and present the
regret analysis.

Using the definitions of μ̃_t and W̃_t, along with the results in Lemma 2 and Corollary 1 about confidence
ellipsoids, the following can be derived.
Corollary 2. With probability 1 − δ, for any sequence of θ_1, θ_2, . . . , θ_T:

1. x_t(a)^⊤ μ̃_t(a) ≥ x_t(a)^⊤ μ_∗, for all arms a ∈ [K], for all time t.
2. x_t(a)^⊤ W̃_t(a) θ_t ≤ x_t(a)^⊤ W_∗ θ_t, for all arms a ∈ [K], for all time t.
3. |Σ_{t=1}^T (μ̃_t(a_t) − μ_∗)^⊤ x_t(a_t)| ≤ 2m √(T log((1 + tm)/δ)) log(T).
4. ||Σ_{t=1}^T (W̃_t(a_t) − W_∗)^⊤ x_t(a_t)|| ≤ ||1_d|| 2m √(T log((d + tmd)/δ)) log(T).
Essentially, the first two claims ensure that we have optimistic estimates, and the last two claims
ensure that the estimates quickly converge to the true parameters.
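Since (8) maximizes and (9) minimizes a linear function over an ellipsoid (respectively a product of
column ellipsoids), both have closed forms: a linear objective x^⊤v over {v : ||v − v̂||_M ≤ ρ} is
extremized at value x^⊤v̂ ± ρ||x||_{M^{−1}}. A small sketch under that observation, where ρ denotes the
confidence radius and all names are our own:

import numpy as np

def optimistic_values(x, mu_hat, W_hat, M, rho, theta):
    """Closed-form optimistic estimates over the confidence ellipsoids.

    Returns x^T mu_tilde(a) from (8) and x^T W_tilde(a) theta from (9);
    since theta >= 0, the column-wise lower bounds combine linearly.
    """
    Minv_x = np.linalg.solve(M, x)
    x_Minv_norm = np.sqrt(x @ Minv_x)              # ||x||_{M^{-1}}
    opt_reward = x @ mu_hat + rho * x_Minv_norm
    opt_consumption = x @ W_hat @ theta - rho * x_Minv_norm * theta.sum()
    return opt_reward, opt_consumption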
3.2 The core algorithm
In this section, we present an algorithm and its analysis, under the assumption that a parameter Z
satisfying certain properties is given. Later, we show how to use the first T0 rounds to compute such
a Z, and also bound the additional regret due to these T0 rounds. We define Z now.
Assumption 1. Let Z be such that, for some universal constants c, c′, OPT/B ≤ Z ≤ c·(OPT/B) + c′.
The algorithm constructs estimates μ̃_t and W̃_t as in Section 3.1. It also runs the OL algorithm for an
instance of the online learning problem. The vector played by the OL algorithm in time step t is θ_t.
After observing the context, the optimistic estimates for each arm are then constructed using θ_t, as
defined in (8) and (9). Intuitively, θ_t is used here as a multiplier to combine different columns of
the weight matrix, to get an optimistic weight vector for every arm. An adjusted estimated reward
for arm a is then defined by using Z to linearly combine the optimistic estimate of the reward with
the optimistic estimate of the consumption, as (x_t(a)^⊤ μ̃_t(a)) − Z(x_t(a)^⊤ W̃_t(a) θ_t). The algorithm
chooses the arm which appears to be the best according to the adjusted estimated reward. After
observing the resulting reward and consumption vectors, the estimates are updated. The online
learning algorithm is advanced by one step, by defining the profit vector to be v_t(a_t) − (B/T)·1. The
algorithm ends either after T time steps or as soon as the total consumption exceeds the budget along
some dimension.
Theorem 2. Given a Z as per Assumption 1, Algorithm 1 achieves the following, with prob. 1 − δ:

    regret(T) ≤ O( (OPT/B + 1) m √(T log(dT/δ)) log(T) ).
(Proof Sketch) We provide a sketch of the proof here, with a full proof given in Appendix E in the
supplement. Let τ be the stopping time of the algorithm. The proof is in 3 steps:

Step 1: Since E[v_t(a_t) | X_t, a_t, H_{t−1}] = W_∗^⊤ x_t(a_t), we apply the Azuma-Hoeffding inequality to get
that with high probability ||Σ_{t=1}^τ v_t(a_t) − W_∗^⊤ x_t(a_t)|| is small. Therefore, we can work with
Σ_{t=1}^τ W_∗^⊤ x_t(a_t) instead of Σ_{t=1}^τ v_t(a_t). A similar application of the Azuma-Hoeffding inequality
is used to bound the gap |Σ_{t=1}^τ r_t(a_t) − μ_∗^⊤ x_t(a_t)|, so that a lower bound on Σ_{t=1}^τ μ_∗^⊤ x_t(a_t) is
sufficient to lower bound the total reward Σ_{t=1}^τ r_t(a_t).
Algorithm 1 Algorithm for linCBwK, with given Z
  Initialize θ_1 as per the online learning (OL) algorithm.
  Initialize Z that satisfies Assumption 1.
  for all t = 1, ..., T do
    Observe X_t.
    For every a ∈ [K], compute μ̃_t(a) and W̃_t(a) as per (8) and (9) respectively.
    Play the arm a_t := arg max_{a∈[K]} x_t(a)^⊤ (μ̃_t(a) − Z W̃_t(a) θ_t).
    Observe r_t(a_t) and v_t(a_t).
    If for some j = 1..d, Σ_{t'≤t} v_{t'}(a_{t'}) · e_j ≥ B then EXIT.
    Use x_t(a_t), r_t(a_t) and v_t(a_t) to obtain μ̃_{t+1}, W̃_{t+1} and G_{t+1}.
    Choose θ_{t+1} using the OL update (refer to (5)) with g_t(θ_t) := θ_t · (v_t(a_t) − (B/T)·1).
  end for
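A minimal sketch of the per-round decision in Algorithm 1, combining the closed-form optimistic
estimates above; the names and the inversion-based implementation are our own simplification, not
the authors' code:

import numpy as np

def lincbwk_round(X, mu_hat, W_hat, M, rho, theta, Z):
    """One decision step of Algorithm 1 (sketch).

    X : K x m matrix whose rows are the contexts x_t(a)
    Z : tradeoff parameter from Assumption 1
    Returns the index of the played arm a_t.
    """
    Minv = np.linalg.inv(M)
    # ||x_t(a)||_{M^{-1}} for every arm, via the diagonal of X Minv X^T
    norms = np.sqrt(np.einsum('am,mn,an->a', X, Minv, X))
    opt_reward = X @ mu_hat + rho * norms
    opt_consum = X @ W_hat @ theta - rho * norms * theta.sum()
    return int(np.argmax(opt_reward - Z * opt_consum))

In the full loop, this choice would be followed by the OL update from Section 2.2 with gain vector
g_t(θ) = θ · (v_t(a_t) − (B/T)·1), e.g., via the mwu_update sketch above.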
Step 2: Using Corollary 2, with high probability, we can bound ||Σ_{t=1}^T (W_∗ − W̃_t(a_t))^⊤ x_t(a_t)||.
It is therefore sufficient to work with the sum of vectors W̃_t(a_t)^⊤ x_t(a_t) instead of W_∗^⊤ x_t(a_t), and
similarly with μ̃_t(a_t)^⊤ x_t(a_t) instead of μ_∗^⊤ x_t(a_t).

Step 3: The proof is completed by showing the desired bound on OPT − Σ_{t=1}^τ μ̃_t(a_t)^⊤ x_t(a_t). This
part is similar to the online stochastic packing problem; if the actual reward and consumption vectors
were μ̃_t(a_t)^⊤ x_t(a_t) and W̃_t(a_t)^⊤ x_t(a_t), then it would be exactly that problem. We adapt techniques
from [4]: use the OL algorithm and the Z parameter to combine constraints into the objective. If a
dimension is being consumed too fast, then the multiplier for that dimension should increase, making
the algorithm pick arms that are not likely to consume too much along this dimension. Regret is
then bounded by a combination of the online learning regret and the error in the optimistic estimates.
3.3 Algorithm with Z computation

In this section, we present a modification of Algorithm 1 which computes the required parameter
Z that satisfies Assumption 1, and therefore does not need to be provided with a Z as input. The
algorithm computes Z using observations from the first T_0 rounds. Once Z is computed, Algorithm
1 can be run for the remaining time steps. However, it needs to be modified slightly to take into
account the budget consumed during the first T_0 rounds. We handle this by using a smaller budget
B′ = B − T_0 in the computations for the remaining rounds. The modified algorithm is given below.

Algorithm 2 Algorithm for linCBwK, with Z computation
  Inputs: B, T_0, B′ = B − T_0
  Using observations from the first T_0 rounds, compute a Z that satisfies Assumption 1.
  Run Algorithm 1 for T − T_0 rounds and budget B′.
Next, we provide the details of how to compute Z from observations in the first T_0 rounds, and how
to choose T_0. We provide a method that takes advantage of the linear structure of the problem, and
explores in the m-dimensional space of contexts and weight vectors to obtain bounds independent of
K. In every round t = 1, . . . , T_0, after observing X_t, let p_t ∈ Δ[K] be

    p_t := arg max_{p∈Δ[K]} ||X_t p||_{M_t^{−1}},    (10)
    where M_t := I + Σ_{i=1}^{t−1} (X_i p_i)(X_i p_i)^⊤.    (11)

Select arm a_t = a with probability p_t(a). In fact, since M_t is a PSD matrix, due to convexity of the
function ||X_t p||²_{M_t^{−1}}, this is the same as playing a_t = arg max_{a∈[K]} ||x_t(a)||_{M_t^{−1}}. Construct
estimates μ̂_t, Ŵ_t of μ_∗, W_∗ at time t as

    μ̂_t := M_t^{−1} Σ_{i=1}^{t−1} (X_i p_i) r_i(a_i),
    Ŵ_t := M_t^{−1} Σ_{i=1}^{t−1} (X_i p_i) v_i(a_i)^⊤.
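A sketch of this pure-exploration choice, using the equivalent arg-max over single contexts noted
above (our own illustration):

import numpy as np

def exploration_arm(X, M):
    """Play the arm whose context has the largest M_t^{-1}-norm; by convexity
    this matches maximizing ||X_t p|| over the simplex, per (10)."""
    Minv = np.linalg.inv(M)
    norms2 = np.einsum('am,mn,an->a', X, Minv, X)
    return int(np.argmax(norms2))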
And, for some value of γ defined later, obtain an estimate ÔPT_γ of OPT as:

    ÔPT_γ := max_π (T/T_0) Σ_{i=1}^{T_0} μ̂_i^⊤ X_i π(X_i)
    such that (T/T_0) Σ_{i=1}^{T_0} Ŵ_i^⊤ X_i π(X_i) ≤ B + γ.    (12)
For an intuition about the choice of arm in (10), observe from the discussion in Section 2.1 that every
column w_{∗j} of W_∗ is guaranteed to lie inside the confidence ellipsoid centered at column ŵ_{tj} of Ŵ_t,
namely the ellipsoid ||w − ŵ_{tj}||²_{M_t} ≤ 4m log(Tm/δ). Note that this ellipsoid has principal axes given
by the eigenvectors of M_t, and the lengths of the semi-principal axes are given by the inverse eigenvalues
of M_t. Therefore, by maximizing ||X_t p||_{M_t^{−1}} we are choosing the context closest to the direction of
the longest principal axis of the confidence ellipsoid, i.e., in the direction of maximum uncertainty.
Intuitively, this corresponds to pure exploration: by making an observation in the direction where
uncertainty is large, we can reduce the uncertainty in our estimate most effectively.
A more algebraic explanation is as follows. In order to get a good estimate of OPT by ÔPT_γ, we want
the estimates Ŵ_t and W_∗ (and, μ̂_t and μ_∗) to be close enough so that ||Σ_{t=1}^{T_0} (Ŵ_t − W_∗)^⊤ X_t π(X_t)||_∞
(and, |Σ_{t=1}^{T_0} (μ̂_t − μ_∗)^⊤ X_t π(X_t)|) is small for all policies π, and in particular for sample optimal
policies. Now, using Cauchy-Schwartz these are bounded by

    Σ_{t=1}^{T_0} ||μ̂_t − μ_∗||_{M_t} ||X_t π(X_t)||_{M_t^{−1}},  and
    Σ_{t=1}^{T_0} ||Ŵ_t − W_∗||_{M_t} ||X_t π(X_t)||_{M_t^{−1}},

where we define ||W||_M, the M-norm of matrix W, to be the max of column-wise M-norms. Using
Lemma 2, the term ||μ̂_t − μ_∗||_{M_t} is bounded by 2√(m log(T_0 m/δ)), and ||Ŵ_t − W_∗||_{M_t} is bounded
by 2√(m log(T_0 md/δ)), with probability 1 − δ. Lemma 3 bounds the second term
Σ_{t=1}^{T_0} ||X_t π(X_t)||_{M_t^{−1}}, but only when π is the played policy. This is where we use that the played
policy p_t was chosen to maximize ||X_t p_t||_{M_t^{−1}}, so that Σ_{t=1}^{T_0} ||X_t π(X_t)||_{M_t^{−1}} ≤
Σ_{t=1}^{T_0} ||X_t p_t||_{M_t^{−1}}, and the bound Σ_{t=1}^{T_0} ||X_t p_t||_{M_t^{−1}} ≤ √(mT_0 log(T_0)) given by Lemma 3
actually bounds Σ_{t=1}^{T_0} ||X_t π(X_t)||_{M_t^{−1}} for all π. Combining, we get a bound of
2m√(T_0 log(T_0) log(T_0 d/δ)) on the deviations ||Σ_{t=1}^{T_0} (Ŵ_t − W_∗)^⊤ X_t π(X_t)||_∞ and
|Σ_{t=1}^{T_0} (μ̂_t − μ_∗)^⊤ X_t π(X_t)| for all π.
We prove the following lemma.

Lemma 5. For γ = (T/T_0) · 2m √(T_0 log(T_0) log(T_0 d/δ)), with probability 1 − O(δ),

    OPT − 2γ ≤ ÔPT_γ ≤ OPT + 9γ(OPT/B + 1).

Corollary 3. Set Z = (ÔPT_γ + 2γ)/B + 1, with the above value of γ. Then, with probability 1 − O(δ),

    OPT/B + 1 ≤ Z ≤ (1 + 11γ/B)(OPT/B + 1).
Corollary 3 implies that as long as B ≥ γ, i.e., B ≥ Ω̃(mT/√T_0), Z is a constant factor approximation
of OPT/B + 1, with OPT/B + 1 ≤ Z; therefore Theorem 2 should provide an Õ((OPT/B + 1) m√T)
regret bound. However, this bound does not account for the budget consumed in the first T_0 rounds.
Considering that (at most) T_0 amount can be consumed from the budget in the first T_0 rounds, we have
an additional regret of (OPT/B) T_0. Further, since we have B′ = B − T_0 budget for the remaining
T − T_0 rounds, we need a Z that satisfies the required assumption for B′ instead of B (i.e., we need
OPT/B′ ≤ Z ≤ O(1)·(OPT/B′) + 1). If B ≥ 2T_0, then B′ ≥ B/2, and using 2 times the Z computed in
Corollary 3 would satisfy the required assumption.
Together, these observations give Theorem 3.

Theorem 3. Using Algorithm 2 with T_0 such that B > max{2T_0, mT/√T_0}, and twice the Z given
by Corollary 3, we get a high probability regret bound of

    Õ( (OPT/B + 1)(T_0 + m√T) ).

In particular, for B > m^{1/2} T^{3/4} and m ≤ √T, we can use T_0 = m√T to get a regret bound of

    Õ( (OPT/B + 1) m√T ).
References
[1] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. In NIPS, 2012.
[2] A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. E. Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In ICML 2014, June 2014.
[3] S. Agrawal and N. R. Devanur. Bandits with concave rewards and convex knapsacks. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, 2014.
[4] S. Agrawal and N. R. Devanur. Fast algorithms for online stochastic convex programming. In SODA, pages 1405–1424, 2015.
[5] S. Agrawal, N. R. Devanur, and L. Li. An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives. In COLT, 2016.
[6] S. Agrawal, Z. Wang, and Y. Ye. A dynamic near-optimal algorithm for online linear programming. Operations Research, 62:876–890, 2014.
[7] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(6):121–164, 2012.
[8] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res., 3, Mar. 2003.
[9] A. Badanidiyuru, R. Kleinberg, and A. Slivkins. Bandits with knapsacks. In FOCS, pages 207–216, 2013.
[10] A. Badanidiyuru, J. Langford, and A. Slivkins. Resourceful contextual bandits. In Proceedings of The Twenty-Seventh Conference on Learning Theory (COLT-14), pages 1109–1134, 2014.
[11] W. Chu, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandits with linear payoff functions. In AISTATS, 2011.
[12] V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, 2008.
[13] N. R. Devanur and T. P. Hayes. The adwords problem: online keyword matching with budgeted bidders under random permutations. In EC, 2009.
[14] N. R. Devanur, K. Jain, B. Sivan, and C. A. Wilkens. Near optimal online algorithms and fast approximation algorithms for resource allocation problems. In EC, 2011.
[15] J. Feldman, M. Henzinger, N. Korula, V. S. Mirrokni, and C. Stein. Online stochastic packing applied to display ad allocation. In Proceedings of the 18th Annual European Conference on Algorithms: Part I, ESA'10, 2010.
[16] V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. SIAM Journal on Computing, 39(2):742–765, 2009.
[17] H. Wu, R. Srikant, X. Liu, and C. Jiang. Algorithms with logarithmic or sublinear regret for constrained contextual bandits. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS), 2015.
| 6292 |@word trial:2 exploitation:2 version:1 norm:3 open:1 km:11 pick:5 profit:1 harder:1 reduction:1 liu:1 pt0:8 k1d:1 ours:1 current:1 contextual:27 com:1 comparing:1 wilkens:1 tackling:1 chu:1 midway:1 enables:1 sponsored:1 update:5 half:1 beginning:1 core:1 provides:1 along:4 constructed:2 become:1 focs:1 prove:2 combine:4 inside:1 manner:1 indeed:2 hardness:1 expected:3 ol:7 actual:1 considering:1 spain:1 estimating:2 competes:1 linearity:1 underlying:1 bounded:5 provided:2 maxa:2 tmd:3 hindsight:1 every:19 concave:2 exactly:1 rm:5 k2:2 schwartz:1 medical:1 before:1 positive:1 t1:1 encoding:1 k2mt:1 mach:1 jiang:1 ap:1 might:1 plus:1 twice:1 initialization:1 studied:1 challenging:1 regret:25 definite:1 universal:1 drug:1 thought:1 matching:1 confidence:17 regular:1 get:8 close:1 context:35 center:1 maximizing:2 yt:1 kale:2 economics:1 devanur:6 convex:2 unstructured:1 pure:1 importantly:1 pull:2 classic:1 handle:1 crowdsourcing:1 updated:1 pt:21 suppose:1 construction:2 user:1 play:1 programming:2 us:1 expensive:1 satisfying:1 observed:1 monster:1 wang:1 wj:2 keyword:1 trade:1 observes:4 halfspaces:2 intuition:2 convexity:1 reward:29 dynamic:5 depend:5 badanidiyuru:2 exit:1 learner:6 shipra:1 packing:4 represented:1 jain:1 fast:5 describe:1 outcome:10 choosing:1 solve:2 nikhil:1 consume:1 itself:1 online:22 agrawal:5 sequence:2 advantage:2 indication:1 kxt:11 eigenvalue:1 sen:1 product:3 ath:1 combining:2 reyzin:1 achieve:1 moved:1 ky:2 getting:1 exploiting:1 optimum:3 r1:1 develop:1 clearer:1 op:3 implies:2 convention:1 differ:1 direction:3 stochastic:9 exploration:4 centered:1 everything:1 require:2 resourceful:1 generalization:4 preliminary:1 randomization:1 opt:31 adjusted:2 extension:1 hold:1 around:1 considered:2 mapping:1 claim:2 substituting:1 achieves:1 travel:1 maker:2 dani:1 offs:1 always:1 aim:1 modified:2 ej:1 corollary:9 encode:1 derived:1 ax:2 june:1 korula:1 she:1 notational:1 longest:1 indicates:1 mwu:3 dependent:3 stopping:1 initially:1 bandit:33 relation:5 arg:9 dual:2 colt:3 denoted:2 plan:1 constrained:2 special:10 initialize:2 equal:1 once:2 never:1 construct:5 kw:4 icml:1 future:1 np:2 few:1 replaced:1 microsoft:2 psd:2 interest:2 primal:1 tj:3 kt:1 tuple:2 necessary:1 minw:1 desired:1 re:1 instance:1 column:10 downside:1 cost:1 introducing:1 deviation:1 seventh:1 too:2 chooses:3 explores:1 siam:1 international:1 accessible:2 picking:1 together:1 quickly:1 again:1 abbasi:1 choose:2 possibly:1 hoeffding:2 henceforth:1 li:3 account:3 bidder:1 bold:2 availability:1 satisfy:2 ad:3 vi:2 later:3 multiplicative:2 picked:2 optimistic:11 observing:4 hazan:1 option:1 inherited:1 minimize:1 square:2 maximized:1 identify:1 vp:1 raghavendra:1 none:1 advertising:2 served:1 history:2 aligns:1 facebook:1 definition:6 against:1 henzinger:1 proof:9 static:8 hsu:1 proved:1 treatment:1 actually:1 auer:1 appears:1 dt:3 supervised:3 follow:1 improved:1 done:1 mar:1 langford:2 sketch:2 maximizer:1 pulling:2 tt0:2 pricing:1 building:1 ye:1 succinctness:1 true:1 multiplier:2 facility:1 regularization:1 deal:1 round:30 game:1 during:1 chop:1 tt:1 meaning:1 wise:1 recently:2 ari:1 common:3 mt:18 m1:3 significant:1 refer:2 feldman:1 ai:8 rd:1 similarly:2 pointed:1 kmt:5 access:2 etc:2 gt:14 matrixp:1 closest:1 recent:1 showed:1 claimed:1 certain:3 inequality:2 meta:1 vt:14 additional:3 managing:1 converge:1 maximize:5 paradigm:1 semi:2 multiple:1 full:1 exceeds:1 adapt:1 clinical:2 long:1 post:1 ensuring:3 regression:1 essentially:2 expectation:1 fifteenth:1 represent:1 agarwal:1 
retail:1 addition:1 want:4 remarkably:1 szepesv:1 crucial:1 rest:1 unlike:1 pkm:2 near:6 exceed:2 easy:1 enough:1 rendering:1 restrict:1 reduce:1 tm:2 tradeoff:2 consumed:4 t0:37 utility:1 guruswami:1 algebraic:1 action:1 useful:3 detailed:1 eigenvectors:1 amount:1 stein:1 generate:1 schapire:2 srikant:1 estimated:2 per:3 sivan:1 drawn:2 budgeted:1 ht:4 sum:3 compete:1 package:1 letter:2 prob:2 run:3 inverse:1 uncertainty:3 soda:1 extends:1 throughout:1 adwords:1 decide:2 wu:1 decision:3 appendix:4 bound:37 ct:7 guaranteed:1 played:6 display:1 oracle:5 annual:1 nontrivial:1 constraint:10 kpk1:1 ri:3 generates:1 kleinberg:1 optimality:1 according:1 combination:1 smaller:1 slightly:1 wi:1 kakade:1 making:3 modification:2 intuitively:3 computationally:1 resource:8 equation:1 remains:1 turn:2 discus:1 know:2 end:2 operation:1 apply:1 observe:3 upto:1 yadkori:1 knapsack:7 denotes:1 remaining:3 ensure:3 include:2 completed:1 k1:1 especially:1 prof:1 objective:5 quantity:1 primary:1 rt:19 dependence:2 md:1 mirrokni:1 capacity:1 consumption:18 w0:1 cauchy:1 length:1 ellipsoid:16 potentially:1 favorably:1 stated:1 policy:20 unknown:8 twenty:1 upper:4 observation:8 incorrectly:1 defining:1 extended:1 payoff:1 bwk:7 precise:2 arbitrary:3 esa:1 introduced:1 pair:4 required:5 namely:1 slivkins:2 distinction:1 barcelona:1 nip:3 adversary:3 below:3 azuma:2 appeared:1 max:10 explanation:1 difficulty:3 natural:1 treated:1 regularized:2 customized:1 advanced:1 arm:37 axis:1 arora:1 columbia:2 faced:1 text:1 prior:1 taming:1 historic:1 permutation:1 sublinear:1 generation:1 interesting:1 allocation:2 sufficient:2 principle:2 playing:2 pi:4 last:1 transpose:1 soon:1 allow:1 face:2 feedback:1 dimension:5 world:1 doesn:2 computes:2 made:1 reinforcement:3 adaptive:4 far:1 ec:3 global:1 hayes:2 b1:3 xi:12 don:1 search:1 learn:1 european:1 aistats:1 pk:1 main:3 linearly:3 noise:3 arise:1 allowed:1 lie:2 procurement:1 theorem:6 xt:52 showing:1 exists:2 effectively:1 supplement:5 budget:16 cartesian:1 kx:1 gap:1 easier:1 led:1 logarithmic:1 simply:1 likely:1 scalar:2 recommendation:2 corresponds:3 minimizer:1 satisfies:4 acm:1 goal:4 identity:1 feasible:1 hard:2 change:1 wt:9 lemma:16 principal:1 total:11 meaningful:1 select:1 evaluate:1 ex:2 |
5,850 | 6,293 | Variance Reduction in Stochastic Gradient
Langevin Dynamics
Avinava Dubey∗, Sashank J. Reddi∗, Barnabás Póczos, Alexander J. Smola, Eric P. Xing
Department of Machine Learning
Carnegie-Mellon University
Pittsburgh, PA 15213
{akdubey, sjakkamr, bapoczos, alex, epxing}@cs.cmu.edu
Sinead A. Williamson
IROM/Statistics and Data Science
University of Texas at Austin
Austin, TX 78712
[email protected]
Abstract
Stochastic gradient-based Monte Carlo methods such as stochastic gradient
Langevin dynamics are useful tools for posterior inference on large scale datasets
in many machine learning applications. These methods scale to large datasets by
using noisy gradients calculated using a mini-batch or subset of the dataset. However, the high variance inherent in these noisy gradients degrades performance and
leads to slower mixing. In this paper, we present techniques for reducing variance
in stochastic gradient Langevin dynamics, yielding novel stochastic Monte Carlo
methods that improve performance by reducing the variance in the stochastic gradient. We show that our proposed method has better theoretical guarantees on
convergence rate than stochastic Langevin dynamics. This is complemented by
impressive empirical results obtained on a variety of real world datasets, and on
four different machine learning tasks (regression, classification, independent component analysis and mixture modeling). These theoretical and empirical contributions combine to make a compelling case for using variance reduction in stochastic
Monte Carlo methods.
1 Introduction
Monte Carlo methods are the gold standard in Bayesian posterior inference due to their asymptotic
convergence properties; however convergence can be slow in large models due to poor mixing.
Gradient-based Monte Carlo methods such as Langevin Dynamics and Hamiltonian Monte Carlo
[10] allow us to use gradient information to more efficiently explore posterior distributions over
continuous-valued parameters. By traversing contours of a potential energy function based on the
posterior distribution, these methods allow us to make large moves in the sample space. Although
gradient-based methods are efficient in exploring the posterior distribution, they are limited by the
computational cost of computing the gradient and evaluating the likelihood on large datasets. As a
result, stochastic variants are a popular choice when working with large data sets [15].
Stochastic gradient methods [13] have long been used in the optimization community to decrease
the computational cost of gradient-based optimization algorithms such as gradient descent. These
methods replace the (expensive, but accurate) gradient evaluation with a noisy (but computationally
∗ denotes equal contribution
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
cheap) gradient evaluation on a random subset of the data. With appropriate scaling, this subsampled
gradient acts as a proxy for the true gradient. A carefully designed schedule of step sizes ensures
convergence of the stochastic algorithm.
A similar idea has been employed to design stochastic versions of gradient-based Monte Carlo methods [15, 1, 2, 9]. By evaluating the derivative of the log likelihood on only a small subset of data
points, we can drastically reduce computational costs. However, using stochastic gradients comes at
a cost: While the resulting estimates are unbiased, they do have very high variance. This leads to an
increased probability of selecting paths with high deviation from the true gradient, leading to slower
convergence.
There have been a number of variations proposed on the basic stochastic gradient Langevin dynamics
(SGLD) model of [15]: [4] incorporates a momentum term to improve posterior exploration; [6]
proposes using additional variables to stabilize fluctuations; [12] proposes modifications to facilitate
exploration of the simplex; [7] provides sampling solutions for correlated data. However, none of these
methods directly tries to reduce the variance in the computed stochastic gradient.

As was the case with the original SGLD algorithm, we look to the optimization community for
inspiration, since high variance is also detrimental in stochastic gradient based optimization. A
plethora of variance reduction techniques have recently been proposed to alleviate this issue for
the stochastic gradient descent (SGD) algorithm [8, 5, 14]. By incorporating a carefully designed
(usually unbiased) term into the update sequence of SGD, these methods reduce the variance that
arises due to the stochastic gradients in SGD, thereby providing strong theoretical and empirical
performance.
Inspired by these successes in the optimization community, we propose methods for reducing the
variance in stochastic gradient Langevin dynamics. Our approach bridges the gap between the faster
(in terms of iterations) convergence of non-stochastic Langevin dynamics, and the faster per-iteration
speed of SGLD. While our approach draws its motivation from the stochastic optimization literature,
it is to our knowledge the first approach that aims to directly reduce variance in a gradient-based
Monte Carlo method. While our focus is on Langevin dynamics, our approach is easily applicable
to other gradient-based Monte Carlo methods.

Main Contributions: We propose a new Langevin algorithm designed to reduce variance in the
stochastic gradient, with minimal additional computational overhead. We also provide a memory
efficient variant of our algorithm. We demonstrate theoretical convergence to the true posterior under
reasonable assumptions, and show that the rate of convergence has a tighter bound than one previously
shown for SGLD. We complement these theoretical results with empirical evaluation showing
impressive speed-ups versus a standard SGLD algorithm, on a variety of machine learning tasks such
as regression, classification, independent component analysis and mixture modeling.
2 Preliminaries

Let X = {x_i}_{i=1}^N be a set of data items modeled using a likelihood function p(X|θ) = Π_{i=1}^N p(x_i|θ),
where the parameter θ has prior distribution p(θ). We are interested in sampling from the posterior
distribution p(θ|X) ∝ p(θ) Π_{i=1}^N p(x_i|θ). If N is large, standard Langevin Dynamics is not feasible
due to the high cost of repeated gradient evaluations; a more scalable approach is to use a stochastic
variant [15], which we will refer to as stochastic gradient Langevin dynamics, or SGLD. SGLD uses
a classical Robbins-Monro stochastic approximation to the true gradient [13]. At each iteration t of
the algorithm, a subset X_t = {x_{t1}, . . . , x_{tn}} of the data is sampled and the parameters are updated
by using only this subset of data, according to

    Δθ_t = (h_t/2) ( ∇log p(θ_t) + (N/n) Σ_{i=1}^n ∇log p(x_{ti}|θ_t) ) + η_t,    (1)
where η_t ∼ N(0, h_t) and h_t is the learning rate. h_t is set in such a fashion that Σ_{t=1}^∞ h_t = ∞ and
Σ_{t=1}^∞ h_t² < ∞. This provides an approximation to a first order Langevin diffusion, with dynamics

    dθ = −(1/2) ∇_θ U dt + dW,    (2)

where U is the unnormalized negative log posterior. Equation 2 has stationary distribution
ρ(θ) ∝ exp{−U(θ)}. Let φ̄ = ∫ φ(θ)ρ(θ)dθ, where φ represents a test function of interest. For a
numerical method that generates samples {θ_t}_{t=0}^{T−1}, let φ̂ denote the empirical average
(1/T) Σ_{t=0}^{T−1} φ(θ_t). Furthermore, let ψ denote the solution to the Poisson equation Lψ = φ − φ̄,
where L is the generator of the diffusion, given by

    Lψ = ⟨∇_θ ψ, −(1/2)∇_θ U⟩ + (1/2) Σ_i ∂²_i ψ.    (3)
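For concreteness, one SGLD iteration per Equation (1) can be sketched as follows; this is our own
illustration, not the authors' code (note η_t has variance h_t, hence the √h standard deviation):

import numpy as np

def sgld_step(theta, minibatch, grad_log_prior, grad_log_lik, N, h, rng):
    """One SGLD update, equation (1).

    grad_log_lik(x, theta) returns the gradient of log p(x|theta)."""
    n = len(minibatch)
    g = grad_log_prior(theta)
    g += (N / n) * sum(grad_log_lik(x, theta) for x in minibatch)
    noise = rng.normal(0.0, np.sqrt(h), size=theta.shape)  # eta_t ~ N(0, h_t)
    return theta + 0.5 * h * g + noise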
The decreasing step size h_t in our approximation (Equation 1) means we do not have to incorporate
a Metropolis-Hastings step to correct for the discretization error relative to Equation 2; however, it
comes at the cost of slowing the mixing rate of the algorithm. We note that, while the discretized
Langevin diffusion is Markovian, its convergence guarantees rely on the quality of the approximation,
rather than on standard Markov chain Monte Carlo analyses that rely on this Markovian property.

A second source of error comes from the use of stochastic approximations to the true gradients. This
is equivalent to using an approximate generator L̃_t = L + ΔV_t, where ΔV_t = (∇U_t − ∇U) · ∇
and ∇U_t is the current stochastic approximation to ∇U. The key contribution of this paper will
be replacing the Robbins-Monro approximation to U with a lower-variance approximation, thus
reducing the error.
To see more clearly the effect of the variance of our stochastic approximation on the estimator error,
we present a result derived for SGLD by [3]:

Theorem 1 ([3]). Let U_t be an unbiased estimate of U and h_t = h for all t ∈ {1, . . . , T}. Then
under certain reasonable assumptions (concretely, assumption [A1] in Section 4), for a smooth test
function φ, the MSE of SGLD at time K = hT is bounded, for some C > 0 independent of (T, h),
in the following manner:

    E(φ̂ − φ̄)² ≤ C ( (1/T²) Σ_t E[||ΔV_t||²] + 1/(Th) + h² ).    (4)

Here ||·|| represents the operator norm.
We clearly see that the MSE depends on the variance term E[||ΔV_t||²], which in turn depends on the
variance of the noisy stochastic gradients. Since, for consistency, we require h → 0 as T → ∞,¹
provided E[||ΔV_t||²] is bounded by a constant, the variance term ceases to dominate as T → ∞,
meaning that the effect of noise in the stochastic gradient becomes negligible. However, outside this
asymptotic regime, the effect of the variance term in Equation 4 remains significant. This motivates
our efforts in this paper to decrease the variance of the approximate gradient, while maintaining an
unbiased estimator.

An easy way to decrease the variance is to use larger minibatches. However, this comes at a considerable
computational cost, undermining the whole benefit of using SGLD. Inspired by the recent
success of variance reduction techniques in stochastic optimization [14, 8, 5], we take a rather
different approach to reduce the effect of noisy gradients.
3 Variance Reduction for Langevin Dynamics

As we have seen in Section 2, reducing the variance of our stochastic approximation can reduce
our estimation error. In this section, we introduce two approaches for variance reduction, based on
recent variance reduction algorithms for gradient descent [5, 8]. The first algorithm, SAGA-LD, is
appropriate when our bottleneck is computation; it yields improved convergence with minimal
additional computational costs over SGLD. The second algorithm, SVRG-LD, is appropriate when our
bottleneck is memory; while the computational cost is generally higher than SAGA-LD, the memory
requirement is lower, with the memory overhead beyond that of stochastic gradient Langevin dynamics
scaling as O(d). In practice, we found that computation was a greater bottleneck in the examples
considered, so our experimental section only focuses on SAGA-LD; however, on larger datasets with
easily computable gradients, SVRG-LD may be the optimal choice.

¹In particular, if h ∝ T^{−1/3}, we obtain the optimal convergence rate for the above upper bound.
Algorithm 1: SAGA-LD
1: Input: θ_0^i = θ_0 ∈ R^d for i ∈ {1, . . . , N}, step sizes {h_t > 0}_{t=0}^{T−1}
2: g_α = Σ_{i=1}^N ∇log p(x_i|θ_0^i)
3: for t = 0 to T − 1 do
4:   Uniformly randomly pick a set I_t from {1, . . . , N} (with replacement) such that |I_t| = b
5:   Randomly draw η_t ∼ N(0, h_t)
6:   θ_{t+1} = θ_t + (h_t/2) ( ∇log p(θ_t) + (N/n) Σ_{i∈I_t} (∇log p(x_i|θ_t) − ∇log p(x_i|θ_t^i)) + g_α ) + η_t
7:   θ_{t+1}^i = θ_t for i ∈ I_t and θ_{t+1}^i = θ_t^i for i ∉ I_t
8:   g_α = g_α + Σ_{i∈I_t} (∇log p(x_i|θ_{t+1}^i) − ∇log p(x_i|θ_t^i))
9: end for
10: Output: Iterates {θ_t}_{t=0}^{T−1}

3.1 SAGA-LD
The increased variance in SGLD is due to the fact that we only have information from n ≪ N data
points at each iteration. However, inspired by a minibatch version of the SAGA algorithm [5], we
can include information from the remaining data points via an approximate gradient, and partially
update the average gradient in each operation. We call this approach SAGA-LD.

Under SAGA-LD, we explicitly store N approximate gradients {g_{αi}}_{i=1}^N, corresponding to the N
data points. Concretely, let θ_t^α = (θ_t^i)_{i=1}^N be a set of N vectors, initialized as θ_0^i = θ_0 for all
i ∈ [N], and initialize g_{αi} = ∇log p(x_i|θ_0^i) and g_α = Σ_{i=1}^N g_{αi}. As we iterate through the data,
if a data point is not selected in the current minibatch, we approximate its gradient with g_{αi}. If
I_t = {i_t^1, . . . , i_t^n} is the minibatch selected at iteration t, this means we approximate the gradient as

    Σ_{i=1}^N ∇log p(x_i|θ_t) ≈ (N/n) Σ_{i∈I_t} (∇log p(x_i|θ_t) − g_{αi}) + g_α.    (5)

When Equation (5) is used for MAP estimation it corresponds to SAGA [5]. However, by injecting
noise into the parameter update in the following manner,

    Δθ_t = (h_t/2) ( ∇log p(θ_t) + (N/n) Σ_{i∈I_t} (∇log p(x_i|θ_t) − g_{αi}) + g_α ) + η_t,  where η_t ∼ N(0, h_t),    (6)

we can adapt it for sampling from the posterior. After updating θ_{t+1} = θ_t + Δθ_t, we let θ_{t+1}^i = θ_t
for i ∈ I_t. Note that we do not need to explicitly store the θ_t^i; instead we just update the corresponding
gradients g_{αi} and the overall approximate gradient g_α. The SAGA-LD algorithm is summarized in
Algorithm 1.
The approximation in Equation (6) gives an unbiased estimate of the true gradient, since the minibatch
I_t is sampled uniformly at random from [N], and the θ_t^i are independent of I_t. SAGA-LD
offers two key properties: (i) As shown in Section 4, SAGA-LD has better convergence properties
than SGLD; (ii) The computational overhead is minimal, since SAGA-LD does not require explicit
calculation of the full gradient. Instead, it simply makes use of gradients that are already being
calculated in the current minibatch. Combined, we end up with a similar computational complexity
to SGLD, with a much better convergence rate.

The only downside of SAGA-LD, when compared with SGLD, is in terms of memory storage. Since
we need to store N individual gradients g_{αi}, we typically have a storage overhead of O(Nd) relative
to SGLD. Fortunately, in many applications of interest to machine learning, the cost can be
reduced to O(N) (please refer to [5] for more details), and in practice the cost of the higher memory
requirements is typically outweighed by the improved convergence and low computational cost.
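A minimal, self-contained sketch of Algorithm 1 follows; it is our own illustration, and for simplicity
it samples the minibatch without replacement so that the stored-gradient sum stays exact, whereas
the paper samples with replacement:

import numpy as np

def saga_ld(theta0, data, grad_log_prior, grad_log_lik, h, T, n, rng):
    """SAGA-LD sketch: stores one gradient per data point plus their sum."""
    N = len(data)
    theta = theta0.copy()
    stored = np.stack([grad_log_lik(x, theta) for x in data])  # g_alpha_i
    g_alpha = stored.sum(axis=0)                               # g_alpha
    samples = []
    for _ in range(T):
        # without replacement here (a simplification) so g_alpha == stored.sum()
        I = rng.choice(N, size=n, replace=False)
        fresh = np.stack([grad_log_lik(data[i], theta) for i in I])
        corr = (fresh - stored[I]).sum(axis=0)
        g = grad_log_prior(theta) + (N / n) * corr + g_alpha
        theta = theta + 0.5 * h * g + rng.normal(0.0, np.sqrt(h), theta.shape)
        g_alpha += corr            # line 8 of Algorithm 1
        stored[I] = fresh          # gradients were evaluated at theta_{t+1}^i
        samples.append(theta.copy())
    return samples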
3.2 SVRG-LD

If the memory overhead of SAGA-LD is not acceptable, we can use a variant that reduces storage
requirements, at the cost of higher computational demands. The memory complexity of SAGA-LD
is high because the approximate gradient g_α is updated at each step. This can be avoided by updating
the approximate gradient every m iterations in a single evaluation, and never storing the individual
gradients g_{αi}. Concretely, every m iterations, we evaluate the gradient on the
Algorithm 2: SVRG-LD
1: Input: θ̃ = θ_0 ∈ R^d, epoch length m, step sizes {h_t > 0}_{t=0}^{T−1}
2: for t = 0 to T − 1 do
3:   if (t mod m = 0) then
4:     θ̃ = θ_t
5:     g̃ = Σ_{i=1}^N ∇log p(x_i|θ̃)
6:   end if
7:   Uniformly randomly pick a set I_t from {1, . . . , N} (with replacement) such that |I_t| = n
8:   Randomly draw η_t ∼ N(0, h_t)
9:   θ_{t+1} = θ_t + (h_t/2) ( ∇log p(θ_t) + (N/n) Σ_{i∈I_t} (∇log p(x_i|θ_t) − ∇log p(x_i|θ̃)) + g̃ ) + η_t
10: end for
11: Output: Iterates {θ_t}_{t=0}^{T−1}
entire data set, obtaining g̃ = Σ_{i=1}^N g̃_i, where g̃_i = ∇log p(x_i|θ̃) is the current local gradient. g̃
then serves as an approximate gradient until the next global evaluation. This yields an update of the
form

    Δθ_t = (h_t/2) ( ∇log p(θ_t) + (N/n) Σ_{i∈I_t} (∇log p(x_i|θ_t) − g̃_i) + g̃ ) + η_t,  where η_t ∼ N(0, h_t).    (7)

Without the added noise η_t, the update sequence in Equation (7) corresponds to the stochastic variance
reduction gradient descent algorithm [8]. Pseudocode for this procedure is given in Algorithm 2.
While the memory requirements are lower, the computational cost is higher, due to the cost of a
full update of g̃. Further, convergence may be negatively affected due to the fact that, as we move
further from θ̃, g̃ will be further from the true gradient. In practice, we found SAGA-LD to be a more
effective algorithm on the datasets considered, so in the interest of space we relegate further details
about SVRG-LD to the appendix.
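For comparison, a matching sketch of Algorithm 2; again, the names and structure are our own
illustration, not the authors' code:

import numpy as np

def svrg_ld(theta0, data, grad_log_prior, grad_log_lik, h, T, n, m, rng):
    """SVRG-LD sketch: a full-data snapshot gradient refreshed every m steps."""
    N = len(data)
    theta = theta0.copy()
    samples = []
    for t in range(T):
        if t % m == 0:                      # start of an epoch
            theta_snap = theta.copy()       # theta_tilde
            g_snap = sum(grad_log_lik(x, theta_snap) for x in data)
        I = rng.integers(0, N, size=n)      # with replacement, as in the paper
        corr = sum(grad_log_lik(data[i], theta) -
                   grad_log_lik(data[i], theta_snap) for i in I)
        g = grad_log_prior(theta) + (N / n) * corr + g_snap
        theta = theta + 0.5 * h * g + rng.normal(0.0, np.sqrt(h), theta.shape)
        samples.append(theta.copy())
    return samples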
4 Analysis

Our motivation in this paper was to improve the convergence of SGLD by reducing the variance of
the gradient estimate. As we saw in Theorem 1, a high variance E[||ΔV_t||²], corresponding to noisy
stochastic gradients, leads to a large bound on the MSE of a test function. We expand this analysis
to show that the algorithms introduced in this paper yield a tighter bound.

Theorem 1 required a number of assumptions, given below in [A1]. Discussion of the reasonableness
of these assumptions is provided in [3].

[A1] We assume the functional ψ that solves the Poisson equation Lψ = φ − φ̄ is bounded up to
3rd-order derivatives by some function Γ, i.e., ||D^k ψ|| ≤ C_k Γ^{p_k}, where D^k is the kth order derivative
(for k = 0, 1, 2, 3), and C_k, p_k > 0. We also assume that the expectation of Γ on {θ_t} is bounded
(sup_t E[Γ^p(θ_t)] < ∞) and that Γ is smooth, such that sup_{s∈(0,1)} Γ^p(sθ + (1 − s)θ′) ≤ C(Γ^p(θ) +
Γ^p(θ′)) for all θ, θ′, p ≤ max_k 2p_k, and some C > 0.

In our analysis of SAGA-LD and SVRG-LD, we make the assumptions in [A1], and add the following
further assumptions about the smoothness of our gradients:

[A2] We assume that the functions log p(x_i|θ) are Lipschitz smooth with constant L for all i ∈ [N],
i.e., ||∇log p(x_i|θ) − ∇log p(x_i|θ′)|| ≤ L ||θ − θ′|| for all i ∈ [N] and θ, θ′ ∈ R^d. We assume that
(ΔV_t ψ(θ))² ≤ C ||∇U_t(θ) − ∇U(θ)||² for some constant C > 0 and all θ ∈ R^d, where ψ is the
solution to the Poisson equation for our test function. We also assume that ||∇log p(θ)|| ≤ σ and
||∇log p(x_i|θ)|| ≤ σ for some σ and all i ∈ [N] and θ ∈ R^d.

The Lipschitz smoothness assumption is very common both in the optimization literature [11] and
when working with Itô diffusions [3]. The bound on (ΔV_t ψ(θ))² holds when the gradient ||∇ψ|| is
bounded.

Loosely, these assumptions encode the idea that the gradients don't change too quickly, so that we
limit the errors introduced by incorporating gradients based on previous values of θ. With these
assumptions, we state the following key results for SAGA-LD and SVRG-LD, which are proved in
the supplement.
Theorem 2. Let h_t = h for all t ∈ {1, . . . , T}. Under the assumptions [A1], [A2], for a smooth test
function φ, the MSE of SAGA-LD (in Algorithm 1) at time K = hT is bounded, for some C > 0
independent of (T, h), in the following manner:

    E(φ̂ − φ̄)² ≤ C ( N² min{σ², (N²/n²)(L²h²σ² + hd)} / (nT) + 1/(Th) + h² ).    (8)

A similar result can be shown for SVRG-LD in Algorithm 2:

Theorem 3. Let h_t = h for all t ∈ {1, . . . , T}. Under the assumptions [A1], [A2], for a smooth test
function φ, the MSE of SVRG-LD (in Algorithm 2) at time K = hT is bounded, for some C > 0
independent of (T, h), in the following manner:

    E(φ̂ − φ̄)² ≤ C ( N² min{σ², m²(L²h²σ² + hd)} / (nT) + 1/(Th) + h² ).    (9)

The result in Theorem 3 is qualitatively equivalent to that in Theorem 2 when m = N/n. In
general, such a choice of m is preferable because, in this case, the overall cost of the calculation of
the full gradient in Algorithm 2 becomes insignificant.
In order to assess the theoretical convergence of our proposed algorithms, we compare the bounds
for SVRG-LD (Theorem 3) and SAGA-LD (Theorem 2) with those obtained for SGLD (Theorem 1).
Under the assumptions in this section, it is easy to show that the variance term in Theorem 1 becomes
O(N²σ²/(Tn)). In contrast, both Theorems 2 and 3 show that, due to a reduction in variance,
SVRG-LD and SAGA-LD exhibit a much weaker dependence. More specifically, this is manifested
in the form of the following bound:

    N² min{σ², (N²/n²)(h²σ² + hd)} / (nT).

Note that this is tighter than the corresponding bound for SGLD. We also note that, similar to SGLD,
SAGA-LD and SVRG-LD require h → 0 as T → ∞. In such a scenario, the convergence becomes
significantly faster relative to SGLD as h → 0.
5 Experiments

We present our empirical results in this section. We focus on applying our stochastic gradient methods
to four different machine learning tasks, carried out on benchmark datasets: (i) Bayesian linear
regression, (ii) Bayesian logistic regression, (iii) independent component analysis, and (iv) mixture
modeling. We focus on SAGA-LD, since in the applications considered, the convergence and
computational benefits of SAGA-LD are more beneficial than the memory benefits of SVRG-LD.

In order to reduce the initial computational costs associated with calculating the initial average
gradient, we use a variant of Algorithm 1 that calculates g_α (in line 2 of Algorithm 1) in an online
fashion and reweights the updates accordingly. Note that such a heuristic is also commonly used in
implementations of SAG and SAGA in the context of optimization [14, 5].

In all our experiments, we use a decreasing step size for SGLD as suggested by [15]. In particular,
we use ε_t = a(b + t)^{−γ}, where the parameters a, b and γ are chosen for each dataset to give the best
performance of the algorithm on that particular dataset. For SAGA-LD, due to the benefit of variance
reduction, we use a simple two-phase constant step size selection strategy. In each of these phases, a
constant step size is chosen such that SAGA-LD gives the best performance on the particular dataset.
The minibatch size, n, in both SGLD and SAGA-LD is held at a constant value of 10 throughout our
experiments. All algorithms are initialized to the same point, and the same sequence of minibatches
is pre-generated and used in both algorithms.
5.1 Regression

We first demonstrate the performance of our algorithm on Bayesian regression. Formally, we are
provided with inputs Z = {x_i, y_i}_{i=1}^N, where x_i ∈ R^d and y_i ∈ R. The distribution of the i-th output
y_i is given by p(y_i|x_i) = N(β^⊤x_i, σ_e^{−1}), where p(β) = N(0, λ^{−1}I). Due to conjugacy, the posterior
distribution over β is also normal, and the gradients of the log-likelihood and the log-prior are given
[Figure 1 panels: average test MSE vs. number of passes through the data for SGLD and SAGA-LD
on the concrete, noise, parkinsons, toms, and 3droad datasets. Figure 2 panels: average test
log-likelihood vs. number of passes for the pima, diabetic, eeg, space, and susy datasets.]
Figure 1: Performance comparison of SGLD and SAGA-LD on a regression task. The x-axis and
y-axis represent the number of passes through the entire data and the average test MSE, respectively.
Additional experiments are provided in the appendix.
Figure 2: Comparison of performance of SGLD and SAGA-LD for Bayesian logistic regression.
The x-axes and y-axes represent the number of effective passes through the dataset and the test
log-likelihood, respectively.
by ∇_β log(P(y_i|x_i, β)) = σ_e(y_i − β^⊤x_i)x_i and ∇_β log(P(β)) = −λβ. We ran experiments on 11
standard UCI regression datasets, summarized in Table 1.² In each case, we set the prior precision
λ = 1, and we partitioned our dataset into training (70%), validation (10%), and test (20%) sets.
The validation set is used to select the step size parameters, and we report the mean square error
(MSE) evaluated on the test set, using 5-fold cross-validation.

The average test MSE on a subset of datasets is reported in Figure 1. Due to space constraints,
we relegate the remaining experimental results to the appendix. As shown in Figure 1, SAGA-LD
converges much faster than the SGLD method (taking less than one pass through the whole dataset
in many cases). This performance gain is consistent across all the datasets. Furthermore, step
size selection was much simpler for SAGA-LD than for SGLD.
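The gradients used in this experiment are simple enough to state in code; the following sketch (our
own, with σ_e and λ fixed to 1 as in the text, and with each data item packed as a pair) plugs directly
into the SAGA-LD sketch above as grad_log_lik and grad_log_prior:

import numpy as np

def grad_log_lik_linreg(xy, beta, sigma_e=1.0):
    """Gradient of log N(y; beta^T x, sigma_e^{-1}) w.r.t. beta."""
    x, y = xy
    return sigma_e * (y - beta @ x) * x

def grad_log_prior_gaussian(beta, lam=1.0):
    """Gradient of log N(beta; 0, lam^{-1} I)."""
    return -lam * beta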
Datasets   concrete  noise  parkinson  bike   toms   protein  casp   kegg   3droad  music   twitter
N          1030      1503   5875       17379  45730  45730    53500  64608  434874  515345  583250
P          8         5      21         12     96     9        9      27     2       90      77

Table 1: Summary of datasets used for regression.
5.2 Classification

We next turn our attention to classification, using Bayesian logistic regression. In this case, the
input is the set Z = {x_i, y_i}_{i=1}^N, where x_i ∈ R^d and y_i ∈ {0, 1}. The distribution of the output y_i
for a given sample x_i is given by P(y_i = 1) = σ(β^⊤x_i), where p(β) = N(0, λ^{−1}I) and
σ(z) = 1/(1 + exp(−z)). Here, the gradients of the log-likelihood and the log-prior are given by
∇_β log(P(y_i|x_i, β)) = (y_i − σ(β^⊤x_i))x_i and ∇_β log(P(β)) = −λβ, respectively. Again, λ is set
to 1 for all experiments, and the dataset split and parameter selection method are exactly the same as
in our regression experiments. We run experiments on five binary classification datasets from the UCI
repository, summarized in Table 2, and report the test set log-likelihood for each dataset, using
5-fold cross-validation. Figure 2 shows the performance of SGLD and SAGA-LD on the classification
datasets. As we saw with the regression task, SAGA-LD converges faster than SGLD on all the
datasets, demonstrating the efficiency of our algorithm in this setting.
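The corresponding logistic-regression gradient, again as a sketch compatible with the sampler
sketches above:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_log_lik_logreg(xy, beta):
    """Gradient of the Bernoulli log-likelihood: (y - sigma(beta^T x)) x."""
    x, y = xy
    return (y - sigmoid(beta @ x)) * x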
Datasets   pima  diabetic  eeg    space  susy
N          768   1151      14980  58000  100000
d          8     20        15     9      18

Table 2: Summary of the datasets used for classification.

²The datasets can be downloaded from https://archive.ics.uci.edu/ml/index.html
[Figure 3 panels: MEG (average test log-likelihood vs. passes for SGLD and SAGA-LD), gradient
variance vs. passes for Regression-concrete and Classification-pima, and the true log-posterior and
estimated posterior histogram for the mixture model.]
Figure 3: The left plot shows the performance of SGLD and SAGA-LD for the ICA task. The next
two plots show the variance of SGLD and SAGA-LD for regression and classification. The rightmost
two plots show the true and estimated posteriors using SAGA-LD for the mixture modeling task.
5.3 Bayesian Independent Component Analysis

To evaluate performance under a Bayesian Independent Component Analysis (ICA) model, we
assume our dataset x = {x_i}_{i=1}^N is distributed according to

    p(x|W) ∝ |det(W)| Π_{i=1}^d p(y_i),   W_ij ∼ N(0, λ),    (10)

where W ∈ R^{d×d}, y_i = w_i^⊤x, and p(y_i) = 1/(4 cosh²(½y_i)). The gradients of the log-likelihood
and the log-prior are ∇_W log(p(x_i|W)) = (W^{−1})^⊤ − Y_i x_i^⊤, where Y_{ij} = tanh(½y_{ij}) for all j ∈ [d],
and ∇_W log(p(W)) = −λW, respectively. All other parameters are set as before. We used a standard
ICA dataset for our experiment,³ comprising 17,730 time-points with 122 channels, from which
we extracted the first 10 channels. Further experimental details are similar to those for regression
and classification. The performance (in terms of test set log-likelihood) of SGLD and SAGA-LD on
the ICA task is shown in Figure 3. As seen in Figure 3, similar to the regression and classification
tasks, SAGA-LD outperforms SGLD on the ICA task.
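A sketch of the ICA gradients stated above (our own illustration; W is the d×d unmixing matrix,
and the matrix-valued parameter works directly with the sampler sketches above):

import numpy as np

def grad_log_lik_ica(x, W):
    """Gradient of log p(x|W) w.r.t. W for the model in (10):
    (W^{-1})^T - Y x^T, with Y_j = tanh(y_j / 2) and y = W x."""
    y = W @ x
    return np.linalg.inv(W).T - np.outer(np.tanh(0.5 * y), x)

def grad_log_prior_ica(W, lam=1.0):
    return -lam * W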
5.4 Mixture Model

Finally, we evaluate how well SAGA-LD estimates the true posterior of the parameters of mixture
models. We generated 20,000 data points from a mixture of two Gaussians, given by
p(x|γ, μ, σ) = ½ N(x; μ, σ²) + ½ N(x; −μ + γ, σ²), where μ = −5, γ = 20, and σ = 5. We estimate
the posterior distribution over γ, holding the other variables fixed. The two plots on the right of
Figure 3 show that we are able to estimate the true posterior correctly.
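For the mixture model, the only free parameter is γ, so the per-point gradient reduces to a scalar; a
sketch (our own illustration, using that the shared normalizing constants of the two components
cancel in the responsibility):

import numpy as np

def grad_log_lik_mixture(x, gamma, mu=-5.0, sigma=5.0):
    """d/d(gamma) of log[0.5 N(x; mu, s^2) + 0.5 N(x; -mu+gamma, s^2)]."""
    s2 = sigma ** 2
    p1 = np.exp(-0.5 * (x - mu) ** 2 / s2)
    p2 = np.exp(-0.5 * (x - (-mu + gamma)) ** 2 / s2)
    # responsibility of the second component times its inner derivative
    return (p2 / (p1 + p2)) * (x - (-mu + gamma)) / s2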
Discussion: Our experiments provide a very compelling reason to use variance reduction techniques
for SGLD, complementing the theoretical justification given in Section 4. The hypothesized variance
reduction is demonstrated in Figure 3, where we compare the variances of SGLD and SAGA-LD
with respect to the true gradient on regression and classification tasks. As we see from all of the
experimental results in this section, SAGA-LD converges with relatively few samples compared
with SGLD. This is especially important in hierarchical Bayesian models where, typically, the size
of the model used is proportional to the number of observations. Thus, with SAGA-LD, we can
achieve better performance with very few samples. Another advantage is that, while we require the
step size to tend to zero, we can use a much simpler schedule than SGLD.
6 Discussion and Future Work

SAGA-LD is a new stochastic Langevin method that obtains improved convergence by reducing the
variance in the stochastic gradient. An alternative method, SVRG-LD, can be used when memory is
at a premium. For both SAGA-LD and SVRG-LD, we proved a tighter convergence bound than the
one previously shown for stochastic gradient Langevin dynamics. We also showed, on a variety of
machine learning tasks, that SAGA-LD converges to the true posterior faster than SGLD, suggesting
the widespread use of SAGA-LD in place of SGLD.

We note that, unlike other stochastic Langevin methods, our sampler is non-Markovian. Since our
convergence guarantees are based on bounding the error relative to the full Langevin diffusion rather
than on properties of a Markov chain, this does not impact the validity of our sampler.

While we showed the efficacy of applying our proposed variance reduction technique to SGLD, our
strategy is generic and can also be applied to other gradient-based MCMC techniques such as
[1, 2, 9, 6, 12]. We leave this as future work.

³The dataset can be downloaded from https://www.cis.hut.fi/projects/ica/eegmeg/MEG_data.html.
References

[1] Sungjin Ahn, Anoop Korattikara, and Max Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In ICML, 2012.
[2] Sungjin Ahn, Babak Shahbaba, and Max Welling. Distributed stochastic gradient MCMC. In ICML, 2014.
[3] Changyou Chen, Nan Ding, and Lawrence Carin. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In NIPS, 2015.
[4] Tianqi Chen, Emily B. Fox, and Carlos Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In ICML, 2014.
[5] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.
[6] Nan Ding, Youhan Fang, Ryan Babbush, Changyou Chen, Robert D. Skeel, and Hartmut Neven. Bayesian sampling using stochastic gradient thermostats. In NIPS, 2014.
[7] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2011.
[8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
[9] Yi-An Ma, Tianqi Chen, and Emily Fox. A complete recipe for stochastic gradient MCMC. In NIPS, 2015.
[10] Radford Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo, 2010.
[11] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2003.
[12] Sam Patterson and Yee Whye Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In NIPS, 2013.
[13] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, Sep. 1951.
[14] Mark W. Schmidt, Nicolas Le Roux, and Francis R. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
[15] Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
| 6293 |@word repository:1 version:2 changyou:2 norm:1 h2t:5 pick:2 thereby:1 ld:13 reduction:14 initial:2 series:1 selecting:1 rightmost:1 outperforms:1 current:4 discretization:1 nt:3 numerical:1 cant:2 cheap:1 designed:3 plot:4 update:8 stationary:1 selected:2 item:1 complementing:1 slowing:1 accordingly:1 experiment3:1 hamiltonian:4 provides:2 iterates:2 simpler:2 casp:1 zhang:1 mathematical:1 combine:1 overhead:5 introductory:1 introduce:1 manner:4 ica:6 integrator:1 discretized:1 inspired:3 decreasing:2 riemann:1 xti:2 becomes:4 spain:1 provided:4 bounded:7 project:1 bike:1 xed:1 irom:1 ag:1 guarantee:3 cial:1 every:2 act:1 ti:4 preferable:1 exactly:1 cosh2:1 t1:4 negligible:1 before:1 local:1 limit:1 sutton:1 path:1 cacy:1 limited:1 practice:3 procedure:1 nite:1 empirical:6 composite:1 ups:1 pre:1 protein:1 selection:3 operator:1 storage:3 context:1 applying:1 yee:2 www:1 equivalent:2 map:1 demonstrated:1 attention:1 emily:2 convex:2 wit:1 roux:1 estimator:2 dominate:1 fang:1 dw:1 hd:3 variation:1 updated:2 annals:1 us:1 pa:1 expensive:1 updating:2 ding:2 ensures:1 decrease:3 thermore:1 ran:1 complexity:2 nesterov:1 dynamic:18 babak:1 mccombs:1 predictive:1 calderhead:1 negatively:1 patterson:1 eric:1 easily:2 sep:1 tx:1 fast:1 effective:2 monte:13 outside:1 heuristic:1 larger:2 valued:1 statistic:2 noisy:6 online:1 sequence:3 advantage:1 propose:2 uci:3 korattikara:1 mixing:3 achieve:1 gold:1 recipe:1 rst:5 convergence:22 requirement:4 plethora:1 incremental:1 converges:4 leave:1 sjakkamr:1 tianqi:2 ben:1 solves:1 strong:1 c:1 signi:2 come:4 girolami:1 correct:1 stochastic:51 exploration:2 require:4 barnab:1 alleviate:1 preliminary:1 tighter:4 ryan:1 yij:2 exploring:1 hold:1 hut:1 considered:3 ic:1 aga:48 exp:2 normal:1 sgld:13 lawrence:1 a2:3 estimation:2 injecting:1 applicable:1 tanh:1 utexas:1 bridge:1 robbins:3 saw:2 tool:1 clearly:2 supt:1 aim:1 rather:3 ck:2 parkinson:2 encode:1 derived:1 focus:4 ax:2 fur:1 likelihood:15 contrast:1 inference:2 twitter:1 neven:1 typically:3 entire:2 expand:1 wij:1 interested:1 issue:1 overall:2 html:2 classification:1 proposes:2 initialize:1 equal:1 never:1 sampling:5 represents:2 look:1 icml:4 carin:1 future:2 simplex:2 report:2 inherent:1 few:2 randomly:4 modi:1 ve:1 individual:2 phase:2 replacement:2 interest:3 evaluation:6 mixture:7 yielding:1 held:1 chain:3 accurate:1 traversing:1 fox:2 iv:1 loosely:1 initialized:2 theoretical:7 minimal:3 increased:2 modeling:4 compelling:2 markovian:3 downside:1 cost:17 deviation:1 subset:7 johnson:1 too:1 reported:1 gd:3 combined:1 cantly:1 avinava:1 quickly:1 concrete:3 again:1 reweights:1 derivative:3 leading:1 suggesting:1 potential:1 summarized:3 stabilize:1 int:1 explicitly:2 depends:2 try:1 shahbaba:1 sup:1 francis:2 xing:1 effected:1 carlos:1 simon:1 monro:3 contribution:4 ass:1 square:1 variance:40 yield:3 outweighed:1 bayesian:12 none:1 carlo:13 xtn:1 cation:13 energy:1 associated:1 riemannian:1 sampled:2 gain:1 dataset:12 proved:2 popular:1 sinead:2 knowledge:1 ut:4 schedule:2 carefully:2 higher:4 dt:1 tom:2 methodology:1 improved:3 rie:1 evaluated:2 strongly:1 furthermore:1 just:1 smola:1 until:1 working:2 hastings:1 replacing:1 minibatch:6 widespread:1 logistic:3 quality:1 facilitate:1 effect:4 hypothesized:1 validity:1 true:12 unbiased:5 inspiration:1 neal:1 please:1 unnormalized:1 whye:2 tt:1 demonstrate:2 complete:1 meaning:1 novel:1 ef:5 recently:1 fi:1 common:1 pseudocode:1 functional:1 mellon:1 refer:2 smoothness:2 rd:7 consistency:1 impressive:2 ahn:2 add:1 posterior:20 recent:2 
showed:2 scenario:1 store:3 certain:1 susy:2 manifested:1 oczos:1 binary:1 success:2 youhan:1 vt:8 yi:18 scoring:1 seen:2 guestrin:1 additional:4 greater:1 fortunately:1 herbert:1 employed:1 speci:1 ii:2 full:4 reduces:1 smooth:5 faster:6 adapt:1 calculation:2 vrg:15 long:1 offer:1 cross:2 bach:2 a1:6 calculates:1 impact:1 variant:5 regression:18 basic:2 scalable:1 cmu:1 poisson:3 expectation:1 arxiv:1 iteration:6 represent:2 source:1 unlike:1 archive:1 pass:3 tend:1 incorporates:1 mod:1 reddi:1 ciently:1 call:1 iii:1 easy:2 split:1 enough:1 variety:3 iterate:1 uctuations:1 reduce:8 idea:2 computable:1 texas:1 det:1 bottleneck:3 defazio:1 accelerating:1 effort:1 sashank:1 useful:1 generally:1 dubey:1 reduced:1 http:2 estimated:2 per:1 correctly:1 carnegie:1 key:3 four:2 demonstrating:1 ht:18 diffusion:5 lacoste:1 sum:1 run:1 place:1 throughout:1 reasonable:2 draw:3 acceptable:1 scaling:1 appendix:3 bound:9 nan:2 fold:2 constraint:1 alex:1 generates:1 speed:2 min:3 relatively:1 department:1 according:2 poor:1 across:1 sam:1 partitioned:1 metropolis:1 hartmut:1 kegg:1 bapoczos:1 computationally:1 equation:10 conjugacy:1 previously:2 remains:1 turn:2 count:1 end:4 serf:1 yurii:1 operation:1 gaussians:1 hierarchical:1 appropriate:3 generic:1 undermining:1 batch:1 alternative:1 schmidt:1 slower:2 original:1 denotes:1 remaining:2 include:1 maintaining:1 cally:1 calculating:1 music:1 especially:1 classical:1 society:1 move:2 objective:1 already:1 degrades:1 strategy:2 dependence:1 exhibit:1 gradient:86 detrimental:1 kth:1 manifold:1 reason:1 meg:1 length:1 modeled:1 index:1 mini:1 providing:1 minimizing:1 robert:1 pima:3 holding:1 negative:1 design:1 implementation:1 motivates:1 teh:2 conversion:1 upper:1 observation:1 datasets:18 markov:3 benchmark:1 descent:5 t:2 langevin:23 community:3 introduced:2 complement:1 required:1 bene:5 gld:40 barcelona:1 nip:7 beyond:1 suggested:1 able:1 usually:1 below:1 regime:1 max:4 memory:11 royal:1 rely:2 improve:3 epxing:1 julien:1 axis:1 carried:1 prior:5 literature:2 epoch:1 asymptotic:2 relative:4 lecture:1 proportional:1 versus:1 generator:2 validation:4 h2:4 downloaded:2 proxy:1 consistent:1 storing:1 austin:2 course:1 summary:2 drastically:1 allow:2 weaker:1 taking:1 distributed:2 calculated:2 skeel:1 world:1 evaluating:2 contour:1 concretely:3 qualitatively:1 commonly:1 sungjin:2 avoided:1 welling:3 approximate:10 obtains:1 yaxis:1 ml:1 global:1 handbook:1 pittsburgh:1 xt1:1 xi:37 reasonableness:1 don:1 continuous:1 diabetic:2 table:4 channel:2 nicolas:1 obtaining:1 eeg:2 williamson:2 mse:13 pk:3 main:1 motivation:2 noise:5 whole:2 bounding:1 n2:2 repeated:1 cient:2 fashion:2 slow:1 tong:1 precision:1 momentum:1 explicit:1 saga:14 ciency:1 justi:1 theorem:12 xt:1 showing:1 dk:1 cease:1 incorporating:2 thermostat:1 adding:1 ci:1 supplement:1 babbush:1 demand:1 gap:1 chen:4 simply:1 explore:1 relegate:2 partially:1 radford:1 springer:1 corresponds:2 complemented:1 minibatches:2 extracted:1 ma:1 replace:1 lipschitz:2 feasible:1 considerable:1 change:1 fisher:1 reducing:7 uniformly:3 sampler:2 classi:11 pas:14 experimental:4 premium:1 aaron:1 formally:1 select:1 i1t:1 support:1 mark:2 arises:1 alexander:1 anoop:1 incorporate:1 evaluate:3 mcmc:5 correlated:1 |
Safe Policy Improvement by Minimizing Robust Baseline Regret
Marek Petrik
University of New Hampshire
[email protected]
Mohammad Ghavamzadeh
Adobe Research & INRIA Lille
[email protected]
Yinlam Chow
Stanford University
[email protected]
Abstract
An important problem in sequential decision-making under uncertainty is to use
limited data to compute a safe policy, which is guaranteed to outperform a given
baseline strategy. In this paper, we develop and analyze a new model-based
approach that computes a safe policy, given an inaccurate model of the system's
dynamics and guarantees on the accuracy of this model. The new robust method
uses this model to directly minimize the (negative) regret w.r.t. the baseline policy.
Contrary to existing approaches, minimizing the regret allows one to improve
the baseline policy in states with accurate dynamics and to seamlessly fall back
to the baseline policy, otherwise. We show that our formulation is NP-hard and
propose a simple approximate algorithm. Our empirical results on several domains
further show that even the simple approximate algorithm can outperform standard
approaches.
1 Introduction
Many problems in science and engineering can be formulated as a sequential decision-making
problem under uncertainty. A common scenario in such problems that occurs in many different fields,
such as online marketing, inventory control, health informatics, and computational finance, is to find
a good or an optimal strategy/policy, given a batch of data generated by the current strategy of the
company (hospital, investor). Although there are many techniques to find a good policy given a batch
of data, only a few of them guarantee that the obtained policy will perform well, when it is deployed.
Since deploying an untested policy can be risky for the business, the product (hospital, investment)
manager does not usually allow it to happen, unless we provide her/him with some performance
guarantees of the obtained strategy, in comparison to the baseline policy (for example the policy that
is currently in use).
In this paper, we focus on the model-based approach to this fundamental problem in the context
of infinite-horizon discounted Markov decision processes (MDPs). In this approach, we use the
batch of data and build a model or a simulator that approximates the true behavior of the dynamical
system, together with an error function that captures the accuracy of the model at each state of the
system. Our goal is to compute a safe policy, i.e., a policy that is guaranteed to perform at least
as well as the baseline strategy, using the simulator and error function. Most of the work on this
topic has been in the model-free setting, where safe policies are computed directly from the batch of
data, without building an explicit model of the system [Thomas et al., 2015b,a]. Another class of
model-free algorithms are those that use a batch of data generated by the current policy and return a
policy that is guaranteed to perform better. They optimize for the policy by repeating this process
until convergence [Kakade and Langford, 2002; Pirotta et al., 2013].
A major limitation of the existing methods for computing safe policies is that they either adopt a
newly learned policy with provable improvements or do not make any improvement at all by returning
the baseline policy. These approaches may be quite limiting when model uncertainties are not uniform
across the state space. In such cases, it is desirable to guarantee an improvement over the baseline
policy by combining it with a learned policy on a state-by-state basis. In other words, we want to use
the learned policy at the states in which either the improvement is significant or the model uncertainty
(error function) is small, and to use the baseline policy everywhere else. However, computing a
learned policy that can be effectively combined with a baseline policy is non-trivial due to the complex
effects of policy changes in an MDP. Our key insight is that this goal can be achieved by minimizing
the (negative) robust regret w.r.t. the baseline policy. This unifies the sources of uncertainties in the
learned and baseline policies and allows a more systematic performance comparison. Note that our
approach differs significantly from the standard one, which compares a pessimistic performance
estimate of the learned policy with an optimistic estimate of the baseline strategy. That may result in
rejecting a learned policy with a performance (slightly) better than the baseline, simply due to the
discrepancy between the pessimistic and optimistic evaluations.
The model-based approach of this paper builds on robust Markov decision processes [Iyengar, 2005;
Wiesemann et al., 2013; Ahmed and Varakantham, 2013]. The main difference is the availability
of the baseline policy that creates unique challenges for sequential optimization. To the best of
our knowledge, such challenges have not yet been fully investigated in the literature. A possible
solution is to solve the robust formulation of the problem and then accept the resulted policy only
if its conservative performance estimate is better than the baseline. While a similar idea has been
investigated in the model-free setting (e.g., [Thomas et al., 2015a]), we show in this paper that it can
be overly conservative.
As the main contribution of the paper, we propose and analyze a new robust optimization formulation
that captures the above intuition of minimizing robust regret w.r.t. the baseline policy. After a
preliminary discussion in Section 2, we formally describe our model and analyze its main properties
in Section 3. We show that in solving this optimization problem, we may have to go beyond the
standard space of deterministic policies and search in the space of randomized policies; we derive a
bound on the performance loss of its solutions; and we prove that solving this problem is NP-hard.
We also propose a simple and practical approximate algorithm. Then, in Section 4, we show that
the standard model-based approach is really a tractable approximation of robust baseline regret
minimization. Finally, our experimental results in Section 5 indicate that even the simple approximate
algorithm significantly outperforms the standard model-based approach when the model is uncertain.
2 Preliminaries
We consider problems in which the agent's interaction with the environment is modeled as an infinite-horizon γ-discounted MDP. A γ-discounted MDP is a tuple M = ⟨X, A, r, P, p₀, γ⟩, where X and A are the state and action spaces, r(x, a) ∈ [−R_max, R_max] is the bounded reward function, P(·|x, a) is the transition probability function, p₀(·) is the initial state distribution, and γ ∈ (0, 1] is a discount factor. We use Π_R = {π : X → Δ_A} and Π_D = {π : X → A} to denote the sets of randomized and deterministic stationary Markovian policies, respectively, where Δ_A is the set of probability distributions over the action space A.
probability is not given. The generalization to include reward estimation is straightforward and is
omitted for the sake of brevity. We use historical data to build a MDP model with the transition
probability denoted by Pb. Due to limited number of samples and other modeling issues, it is unlikely
that Pb matches the true transition probability of the system P ? . We also require that the estimated
model Pb deviates from the true transition probability P ? as stated in the following assumption:
Assumption 1. For each (x, a) ? X ? A, the error function e(x, a) bounds the `1 difference between
the estimated transition probability and true transition probability, i.e.,
kP ? (?|x, a) ? Pb(?|x, a)k1 ? e(x, a).
(1)
The error function e can be derived either directly from samples using high probability concentration
bounds, as we briefly outline in Appendix A, or based on specific domain properties.
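The details of Appendix A are not reproduced here. As an illustration only, the following sketch builds e(x, a) from raw transition counts using an ℓ₁ concentration bound of the form given by Weissman et al. [2003] (cited in the references); the shape conventions, the uniform confidence split, and the fallback for unvisited pairs are assumptions of this sketch rather than details taken from the paper.

import numpy as np

def l1_error_function(counts, delta=0.05):
    """Sketch: e(x, a) such that, with probability at least 1 - delta,
    ||P*(.|x, a) - P_hat(.|x, a)||_1 <= e(x, a) holds for every (x, a).

    counts: integer array of shape [S, A, S] of observed transitions.
    Uses a Weissman et al. (2003) style bound, with the failure
    probability split uniformly over all S * A state-action pairs."""
    S, A, _ = counts.shape
    n = counts.sum(axis=2).astype(float)      # samples per (x, a)
    delta_xa = delta / (S * A)                # union bound over all pairs
    log_term = S * np.log(2.0) + np.log(1.0 / delta_xa)  # log(2^S / delta_xa)
    with np.errstate(divide="ignore"):
        e = np.sqrt(2.0 * log_term / n)
    e[n == 0] = 2.0                           # no data: maximal l1 distance
    return np.minimum(e, 2.0)                 # l1 distance is at most 2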
To model the uncertainty in the transition probability, we adopt the notion of robust MDP (RMDP) [Iyengar, 2005; Nilim and El Ghaoui, 2005; Wiesemann et al., 2013], i.e., an extension of MDP in which nature adversarially chooses the transitions from a given uncertainty set

\[ \Xi(\hat{P}, e) \;=\; \Big\{ \xi : X \times A \to \Delta_X \;:\; \| \xi(\cdot \mid x, a) - \hat{P}(\cdot \mid x, a) \|_1 \le e(x, a), \;\; \forall (x, a) \in X \times A \Big\}. \]
From Assumption 1, we notice that the true transition probability is in the set of uncertain transition probabilities, i.e., P* ∈ Ξ(P̂, e). The above ℓ₁ constraint is common in the RMDP literature (e.g., [Iyengar, 2005; Wiesemann et al., 2013; Petrik and Subramanian, 2014]). The uncertainty set Ξ in RMDP is (x, a)-rectangular and randomized [Le Tallec, 2007; Wiesemann et al., 2013]. One of the motivations for considering (x, a)-rectangular sets in RMDPs is that they lead to tractable solutions in the conventional reward maximization setting. However, in the robust regret minimization problem that we propose in this paper, even if we assume that the uncertainty set is (x, a)-rectangular, it does not guarantee tractability of the solution. While it is of great interest to investigate the structure of uncertainty sets that lead to tractable algorithms in robust regret minimization, it is beyond the main scope of this paper and we leave it as future work.
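Although the tractability question above is left open, the inner adversary for a fixed policy and an (x, a)-rectangular ℓ₁ set has a well-known greedy solution, used for instance in the RMDP algorithms of Iyengar [2005] and Petrik and Subramanian [2014]. The sketch below illustrates that standard primitive for a single (x, a) pair; it is our own generic illustration, not code from the paper.

import numpy as np

def worst_case_l1(p_hat, v, e):
    """Adversary's inner problem for one (x, a) pair: choose xi(.|x, a) in the
    L1 ball of radius e around p_hat (intersected with the simplex) that
    minimizes the expected next-state value sum_x' xi(x') v(x').

    Greedy O(S log S) solution: shift up to e/2 probability mass onto the
    worst next state, taking it away from the best next states first."""
    xi = p_hat.astype(float).copy()
    worst = int(np.argmin(v))
    budget = min(e / 2.0, 1.0 - xi[worst])    # mass we are allowed to move
    xi[worst] += budget
    for s in np.argsort(v)[::-1]:             # highest-value states lose mass first
        if budget <= 0.0:
            break
        if s == worst:
            continue
        take = min(budget, xi[s])
        xi[s] -= take
        budget -= take
    return xi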
For each policy π ∈ Π_R and nature's choice ξ ∈ Ξ, the discounted return is defined as

\[ \rho(\pi, \xi) \;=\; \lim_{T \to \infty} \mathbb{E}_\xi \Big[ \sum_{t=0}^{T-1} \gamma^t \, r(X_t, A_t) \;\Big|\; X_0 \sim p_0, \; A_t \sim \pi(X_t) \Big] \;=\; p_0^\top v_\pi^\xi, \]

where X_t and A_t are the state and action random variables at time t, and v_π^ξ is the corresponding value function. An optimal policy for a given ξ is defined as π*_ξ ∈ arg max_{π∈Π_R} ρ(π, ξ). Similarly, under the true transition probability P*, the true return of a policy π and a truly optimal policy are defined as ρ(π, P*) and π* ∈ arg max_{π∈Π_R} ρ(π, P*), respectively. Although we define the optimal policy using arg max_{π∈Π_R}, it is known that every reward maximization problem in MDPs has at least one optimal policy in Π_D.

Finally, given a deterministic baseline policy π_B, we call a policy π safe, if its "true" performance is guaranteed to be no worse than that of the baseline policy, i.e., ρ(π, P*) ≥ ρ(π_B, P*).
3 Robust Policy Improvement Model
In this section, we introduce and analyze an optimization procedure that robustly improves over a
given baseline policy π_B. As described above, the main idea is to find a policy that is guaranteed to
be an improvement for any realization of the uncertain model parameters. The following definition
formalizes this intuition.
Definition 2 (The Robust Policy Improvement Problem). Given a model uncertainty set Ξ(P̂, e) and a baseline policy π_B, find a maximal ζ ≥ 0 such that there exists a policy π ∈ Π_R for which ρ(π, ξ) ≥ ρ(π_B, ξ) + ζ, for every ξ ∈ Ξ(P̂, e).¹

The problem posed in Definition 2 readily translates to the following optimization problem:

\[ \pi_S \;\in\; \arg\max_{\pi \in \Pi_R} \min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big]. \tag{2} \]
Note that since the baseline policy π_B achieves value 0 in (2), ζ in Definition 2 is always non-negative. Therefore, any solution π_S of (2) is safe, because under the true transition probability P* ∈ Ξ(P̂, e), we have the guarantee that

\[ \rho(\pi, P^*) - \rho(\pi_B, P^*) \;\ge\; \min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big] \;\ge\; 0. \]
It is important to highlight how Definition 2 differs from the standard approach (e.g., [Thomas et al., 2015a]) on determining whether a policy π is an improvement over the baseline policy π_B. The standard approach considers a statistical error bound that translates to the test: min_{ξ∈Ξ} ρ(π, ξ) ≥ max_{ξ∈Ξ} ρ(π_B, ξ). The uncertainty parameters ξ on the two sides of this test are not necessarily the same. Therefore, any optimization procedure derived based on this test is more conservative than the problem in (2). Indeed, when the error function in Ξ is large, even the baseline policy (π = π_B) may not pass this test.
¹ From now on, for brevity, we omit the parameters P̂ and e, and use Ξ to denote the model uncertainty set.
Figure 1: (left) A robust/uncertain MDP used in Example 4 that illustrates the sub-optimality of
deterministic policies in solving the optimization problem (2). (right) A Markov decision process
with significant uncertainty in the baseline policy.
In Section 5.1, we show the conditions under which this approach fails. Our
approach also differs from other related work in that we consider regret with respect to the baseline
policy, and not the optimal policy, as considered in [Xu and Mannor, 2009].
In the remainder of this section, we highlight some major properties of the optimization problem (2).
Specifically, we show that its solution policy may be purely randomized, we compute a bound on the
performance loss of its solution policy w.r.t. π*, and we finally prove that it is an NP-hard problem.
3.1 Policy Class
The following theorem shows that we should search for the solutions of the optimization problem (2) in the space of randomized policies Π_R.

Theorem 3. The optimal solution to the optimization problem (2) may not be attained by a deterministic policy. Moreover, the loss due to considering deterministic policies cannot be bounded, i.e., there exists no constant c ∈ ℝ such that

\[ \max_{\pi \in \Pi_R} \min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big] \;\le\; c \cdot \max_{\pi \in \Pi_D} \min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big]. \]
Proof. The proof follows directly from Example 4. The optimal policy in this example is randomized
and achieves a guaranteed improvement ζ = 1/2. There is no deterministic policy that guarantees a
positive improvement over the baseline policy, which proves the second part of the theorem.
Example 4. Consider the robust/uncertain MDP on the left panel of Figure 1 with states {x₁, x₁₁} ⊆ X, actions A = {a₁, a₂, a₁₁, a₁₂}, and discount factor γ = 1. Actions a₁ and a₂ are shown as solid black nodes. A number with no state represents a terminal state with the corresponding reward. The robust outcomes {ξ₁, ξ₂} correspond to the uncertainty set of transition probabilities Ξ. The baseline policy π_B is deterministic and is denoted by double edges. It can be readily seen from the monotonicity of the Bellman operator that any improved policy π will satisfy π(a₁₂|x₁₁) = 1. Therefore, we will only focus on the policy at state x₁. The robust improvement as a function of π(·|x₁) and the uncertainties {ξ₁, ξ₂} is given as follows:

\[ \min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big] \;=\; \min_{\xi \in \Xi} \left( \begin{array}{c|cc} \pi \backslash \xi & \xi_1 & \xi_2 \\ \hline a_1 & 3 & 1 \\ a_2 & 2 & 2 \end{array} \;-\; \begin{array}{c|cc} \pi_B \backslash \xi & \xi_1 & \xi_2 \\ \hline a_1 & 2 & 1 \end{array} \right) \;=\; 0. \]
This shows that no deterministic policy can achieve a positive improvement in this problem. However, a randomized policy π(a₁|x₁) = π(a₂|x₁) = 1/2 returns the maximum improvement ζ = 1/2.

Randomized policies can do better than their deterministic counterparts, because they allow for hedging among various realizations of the MDP parameters. Example 4 shows a problem such that there exists a realization of the parameters with improvement over the baseline when any deterministic policy is executed. However, in this example, there is no single realization of parameters that provides an improvement for all the deterministic policies simultaneously. Therefore, randomizing the policy guarantees an improvement independent of the parameters' choice.
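A quick numeric check of Example 4, using the return values from the table above (the script itself is ours and purely illustrative):

import numpy as np

# rho(pi, xi) for the two deterministic choices at x1; columns are xi_1, xi_2
rho_pi = np.array([[3.0, 1.0],    # pi plays a1 (and a12 at x11)
                   [2.0, 2.0]])   # pi plays a2
rho_b = np.array([2.0, 1.0])      # baseline pi_B (a1, then a11)

for name, w in [("a1", [1.0, 0.0]), ("a2", [0.0, 1.0]), ("1/2-1/2", [0.5, 0.5])]:
    worst = ((np.array(w) @ rho_pi) - rho_b).min()
    print(name, "worst-case improvement:", worst)
# prints 0.0, 0.0, and 0.5: only the randomized policy achieves zeta > 0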
3.2 Performance Bound
Generally, one cannot compute the truly optimal policy π* using an imprecise model. Nevertheless, it is still crucial to understand how errors in the model translate to a performance loss w.r.t. an optimal policy. The following theorem (proved in Appendix C) provides a bound on the performance loss of any solution π_S to the optimization problem (2).

Theorem 5. A solution π_S to the optimization problem (2) is safe and its performance loss is bounded by the following inequality:

\[ \Phi(\pi_S) \;=\; \rho(\pi^*, P^*) - \rho(\pi_S, P^*) \;\le\; \min\Big\{ \frac{2\gamma R_{\max}}{(1-\gamma)^2} \big( \| e_{\pi^*} \|_{1, u^*_{\pi^*}} + \| e_{\pi_B} \|_{1, u^*_{\pi_B}} \big), \;\; \Phi(\pi_B) \Big\}, \]

where u*_{π*} and u*_{π_B} are the state occupancy distributions of the optimal and baseline policies in the true MDP P*. Furthermore, the above bound is tight.
3.3 Computational Complexity
In this section, we analyze the computational complexity of solving the optimization problem (2) and prove that the problem is NP-hard. In particular, we proceed by showing that the following sub-problem of (2):

\[ \arg\min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big], \tag{3} \]

for a fixed π ∈ Π_R, is NP-hard. The optimization problem (3) can be interpreted as computing a policy that simultaneously minimizes the returns of two MDPs, whose transitions are induced by policies π and π_B. The proof of Theorem 6 is given in Appendix D.

Theorem 6. Both optimization problems (2) and (3) are NP-hard.

Although the optimization problem (2) is NP-hard in general, it can be tractable in certain settings. One such setting is when the Markov chain induced by the baseline policy is known precisely, as the following proposition states. See Appendix E for the proof.
Proposition 7. Assume that for each x ∈ X, the error function induced by the baseline policy is zero, i.e., e(x, π_B(x)) = 0.² Then, the optimization problem (2) is equivalent to the following robust MDP (RMDP) problem and can be solved in polynomial time:

\[ \arg\max_{\pi \in \Pi_R} \min_{\xi \in \Xi} \rho(\pi, \xi). \tag{4} \]

3.4 Approximate Algorithm
Solving for the optimal solution of (2) may not be possible in practice, since the problem is NP-hard.
In this section, we propose a simple and practical approximate algorithm. The empirical results
of Section 5 indicate that this algorithm holds promise and also suggest that the approach may be a
good starting point for building better approximate algorithms in the future.
Algorithm 1: Approximate Robust Baseline Regret Minimization Algorithm
  input: empirical transition probabilities P̂, baseline policy π_B, and the error function e
  output: policy π̂_S
  1  foreach x ∈ X, a ∈ A do
  2      ẽ(x, a) ← e(x, a) if π_B(x) ≠ a, and ẽ(x, a) ← 0 otherwise
  3  end
  4  π̂_S ← arg max_{π∈Π_R} min_{ξ∈Ξ(P̂,ẽ)} [ρ(π, ξ) − ρ(π_B, ξ)]
  5  return π̂_S
Algorithm 1 contains the pseudocode of the proposed approximate method. The main idea is to use a modified uncertainty model by assuming no error in the transition probabilities of the baseline policy. Then it is possible to minimize the robust baseline regret in polynomial time, as suggested by Proposition 7. Assuming no error in baseline transition probabilities is reasonable for two main reasons. First, in practice, data is often generated by executing the baseline policy, and thus, we may have enough data for a good approximation of the baseline's transition probabilities: ∀x ∈ X, P̂(·|x, π_B(x)) ≈ P*(·|x, π_B(x)). Second, transition probabilities often affect the baseline and improved policies similarly, and as a result, have little effect on the difference between their returns (i.e., the regret). See Section 5.1 for an example of such behavior.

² Note that this is equivalent to precisely knowing the Markov chain induced by the baseline policy P̂_{π_B}.
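A minimal Python sketch of Algorithm 1, assuming a generic solver for the regret objective is available; by Proposition 7, the call on the last line is a polynomial-time RMDP computation once ẽ vanishes on the baseline's actions. All names here are placeholders.

import numpy as np

def approx_robust_baseline_regret(P_hat, e, pi_b, solve_regret_rmdp):
    """Algorithm 1 (sketch): zero the error function on the actions the
    baseline takes, then minimize the robust baseline regret over the
    modified uncertainty set Xi(P_hat, e_tilde).

    P_hat: [S, A, S] estimated transitions; e: [S, A] error function;
    pi_b: [S] deterministic baseline; solve_regret_rmdp: assumed solver for
    argmax_pi min_xi [rho(pi, xi) - rho(pi_b, xi)]."""
    e_tilde = e.copy()
    e_tilde[np.arange(len(pi_b)), pi_b] = 0.0   # trust the model where pi_B acts
    return solve_regret_rmdp(P_hat, e_tilde, pi_b)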
4 Standard Policy Improvement Methods
In Section 3, we showed that finding an exact solution to the optimization problem (2) is computationally expensive and proposed an approximate algorithm. In this section, we describe and analyze
two standard methods for computing safe policies and show how they can be interpreted as an
approximation of our proposed baseline regret minimization. Due to space limitations, we describe
another method, called reward-adjusted MDP, in Appendix H, but report its performance in Section 5.
4.1 Solving the Simulator
The simplest solution to (2) is to assume that our simulator is accurate and to solve the reward maximization problem of an MDP with the transition probability P̂, i.e., π_sim ∈ arg max_{π∈Π_R} ρ(π, P̂). Theorem 8 quantifies the performance loss of the resulting policy π_sim.

Theorem 8. Let π_sim be an optimal policy of the reward maximization problem of an MDP with transition probability P̂. Then under Assumption 1, the performance loss of π_sim is bounded by

\[ \Phi(\pi_{sim}) \;=\; \rho(\pi^*, P^*) - \rho(\pi_{sim}, P^*) \;\le\; \frac{2\gamma R_{\max}}{(1-\gamma)^2} \, \| e \|_\infty. \]

The proof is available in Appendix F. Note that there is no guarantee that π_sim is safe, and thus, deploying it may lead to undesirable outcomes due to model uncertainties. Moreover, the performance guarantee of π_sim, reported in Theorem 8, is weaker than that in Theorem 5 due to the L∞ norm.
4.2 Solving Robust MDP
Another standard solution to the problem in (2) is based on solving the RMDP problem (4). We prove that the policy returned by this algorithm is safe and has better (sharper) worst-case guarantees than the simulator-based policy π_sim. Details of this algorithm are summarized in Algorithm 2. The algorithm first constructs and solves an RMDP. It then returns the solution policy if its worst-case performance over the uncertainty set is better than the robust performance max_{ξ∈Ξ} ρ(π_B, ξ), and it returns the baseline policy π_B, otherwise.

Algorithm 2: RMDP-based Algorithm
  input: simulated MDP P̂, baseline policy π_B, and the error function e
  output: policy π_R
  1  π₀ ← arg max_{π∈Π_R} min_{ξ∈Ξ(P̂,e)} ρ(π, ξ)
  2  if min_{ξ∈Ξ(P̂,e)} ρ(π₀, ξ) > max_{ξ∈Ξ} ρ(π_B, ξ) then return π₀ else return π_B

Algorithm 2 makes use of the following approximation to the solution of (2):

\[ \max_{\pi \in \Pi_R} \min_{\xi \in \Xi} \big[ \rho(\pi, \xi) - \rho(\pi_B, \xi) \big] \;\ge\; \max_{\pi \in \Pi_R} \min_{\xi \in \Xi} \rho(\pi, \xi) \;-\; \max_{\xi \in \Xi} \rho(\pi_B, \xi), \]

and guarantees safety by designing π_R such that the RHS of this inequality is always non-negative. The performance bound of π_R is identical to that in Theorem 5 and is stated and proved in Theorem 12 in Appendix G. Although the worst-case bounds are the same, we show in Section 5.1 that the performance loss of π_R may be worse than that of π_S by an arbitrarily large margin.
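For concreteness, here is a sketch of the robust reward maximization that line 1 of Algorithm 2 requires (and that Algorithm 1 reduces to, with the modified error function ẽ, via Proposition 7), written as worst-case value iteration over the ℓ₁ uncertainty set and reusing worst_case_l1 from the sketch in Section 2. Convergence checks and speed-ups are omitted; this is our illustration, not the authors' implementation.

import numpy as np

def robust_value_iteration(P_hat, r, e, gamma, iters=500):
    """Approximately solve max_pi min_xi rho(pi, xi) for an (x, a)-rectangular
    L1 RMDP. P_hat: [S, A, S]; r, e: [S, A]; returns a greedy deterministic
    policy and its worst-case values (assumes iters >= 1)."""
    S, A = r.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = np.empty((S, A))
        for x in range(S):
            for a in range(A):
                xi = worst_case_l1(P_hat[x, a], v, e[x, a])  # adversary's move
                q[x, a] = r[x, a] + gamma * xi @ v
        v = q.max(axis=1)
    return q.argmax(axis=1), v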
It is important to discuss the difference between Algorithms 1 and 2. Although both solve an RMDP,
they use different uncertainty sets ?. The uncertainty set used in Algorithm 2 is the true error function
in building the simulator, while the uncertainty set used in Algorithm 1 assumes that the error function
is zero for all the actions suggested by the baseline policy. As a result, both algorithms approximately
solve (2) but approximate the problem in different ways.
5 Experimental Evaluation
In this section, we experimentally evaluate the benefits of minimizing the robust baseline regret. First,
we demonstrate that solving the problem in (2) may outperform the regular robust formulation by an
arbitrarily large margin. Then, in the remainder of the section, we compare the solution quality of
Algorithm 1 with simpler methods in more complex and realistic experimental domains. The purpose
of our experiments is to show how solution quality depends on the degree of model uncertainties.
5.1 An Illustrative Example
Consider the example depicted on the right panel of Figure 1. White nodes represent states and black nodes represent state-action pairs. Labels on the edges originating from states indicate the policy according to which the action is taken; labels on the edges originating from actions denote the rewards and, if necessary, the name of the uncertainty realization. The baseline policy is π_B, the optimal policy is π*, and the discount factor is γ ∈ (0, 1).

This example represents a setting in which the level of uncertainty varies significantly across the individual states: the transition model is precise in state x₀ and uncertain in state x₁. The baseline policy π_B takes a suboptimal action in state x₀ and the optimal action in the uncertain state x₁. To prevent being overly conservative in computing a safe policy, one needs to consider that the realization of uncertainty in x₁ influences both the baseline and improved policies.

Using the plain robust optimization formulation in Algorithm 2, even the optimal policy π* is not considered safe in this example. In particular, the robust return of π* is min_ξ ρ(π*, ξ) = −9, while the optimistic return of π_B is max_ξ ρ(π_B, ξ) = +10. On the other hand, solving (2) will return the optimal policy, since min_ξ [ρ(π*, ξ) − ρ(π_B, ξ)] = 11 − 10 = −9 − (−10) = 1. Even the heuristic method of Section 3.4 will return the optimal policy. Note that since the reward-adjusted formulation (see its description in Appendix H) is even more conservative than the robust formulation, it will also fail to improve on the baseline policy.
5.2 Grid Problem
In this section, we use a simple grid problem to compare the solution quality of Algorithm 1 with
simpler methods. The grid problem is motivated by modeling customer interactions with an online
system. States in the problem represent a two dimensional grid. Columns capture states of interaction
with the website and rows capture customer states such as overall satisfaction. Actions can move
customers along either dimension with some probability of failure. A more detailed description of
this domain is provided in Section I.1.
Our goal is to evaluate how the solution quality of various methods depends on the magnitude of the
model error e. The model is constructed from samples, and thus, its magnitude of error depends on
the number of samples used to build it. We use a uniform random policy to gather samples. Model
error function e is then constructed from this simulated data using bounds in Section B. The baseline
policy is constructed to be optimal when ignoring the row part of state; see Section I.1 for more
details.
All methods are compared in terms of the improvement percentage in total return over the baseline
policy. Figure 2 depicts the results as a function of the number of transition samples used in
constructing the uncertain model and represents the mean of 40 runs. Methods used in the comparison
are as follows: 1) EXP represents solving the nominal model as described in Section 4.1, 2) RWA
represent the reward-adjusted formulation in Algorithm 3 of Appendix H, 3) ROB represents the
robust method in Algorithm 2, and 4) RBC represents our approximate solution of Algorithm 1.
Figure 2 shows that Algorithm 1 not only reliably computes policies that are safe, but also significantly
improves on the quality of the baseline policy when the model error is large.

Figure 2: Improvement in return over the baseline policy in: (left) the grid problem and (right) the energy arbitrage problem. The dashed line shows the return of the optimal policy. [Line plots of improvement over baseline (%) against the number of samples for the EXP, RWA, ROB, and RBC methods.]

When the number of samples is small, Algorithm 1 is significantly better than other methods by relying on the baseline
policy in states with a large model error and only taking improving actions when the model error is
small. Note that EXP can be significantly worse than the baseline policy, especially when the number
of samples is small.
5.3 Energy Arbitrage
In this section, we compare model-based policy improvement methods using a more complex domain.
The problem is to determine an energy arbitrage policy in given limited energy storage (a battery)
and stochastic prices. At each time period, the decision-maker observes the available battery charge
and a Markov state of energy price, and decides on the amount of energy to purchase or to sell.
The set of states in the energy arbitrage problem consists of three components: current state of charge,
current capacity, and a Markov state representing price; the actions represent the amount of energy
purchased or sold; the rewards indicate profit/loss in the transactions. We discretize the state of
charge and action sets to 10 separate levels. The problem is based on the domain from [Petrik and
Wu, 2015], whose description is detailed in Appendix I.2.
Energy arbitrage is a good fit for model-based approaches because it combines known and unknown
dynamics. Physics of battery charging and discharging can be modeled with high confidence, while
the evolution of energy prices is uncertain. As a result, using an explicit battery model, the only
uncertainty is in transition probabilities between the 10 states of the price process instead of the entire
1000 state-action pairs. This significantly reduces the number of samples needed.
As in the previous experiments, we estimate the uncertainty model in a data-driven manner. Notice
that the inherent uncertainty is only in price transitions and is independent of the policy used (which
controls the storage dynamics). Here the uncertainty set of transition probabilities is estimated using
the method in Appendix A, but the uncertainty set is only a non-singleton w.r.t. price states. Figure 2
shows the percentage improvement on the baseline policy averaged over 5 runs. We clearly observe
that the heuristic RBC method, described in Section 3.4, effectively interleaves the baseline policy (in
states with high level of uncertainty) and an improved policy (in states with low level of uncertainty),
and results in the best performance in most cases. Solving a robust MDP with no baseline policy
performed similarly to directly solving the simulator.
6 Conclusion
In this paper, we study the model-based approach to the fundamental problem of learning safe
policies given a batch of data. A policy is considered safe, if it is guaranteed to have an improved
performance over a baseline policy. Solving the problem of safety in sequential decision-making can
immensely increase the applicability of the existing technology to real-world problems. We show
that the standard robust formulation may be overly conservative and formulate a better approach
that interleaves an improved policy with the baseline policy, based on the error at each state. We
propose and analyze an optimization problem based on this idea (see (2)) and prove that solving it is
NP-hard. Furthermore, we propose several approximate solutions and experimentally evaluated their
performance.
References
A. Ahmed and P. Varakantham. Regret based robust solutions for uncertain Markov decision processes. In Advances in Neural Information Processing Systems, pages 1–9, 2013.

T. Hansen, P. Miltersen, and U. Zwick. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. Journal of the ACM, 60(1):1–16, 2013.

G. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005.

S. Kakade and J. Langford. Approximately optimal approximate reinforcement learning. In Proceedings of the 19th International Conference on Machine Learning, pages 267–274, 2002.

Y. Le Tallec. Robust, Risk-Sensitive, and Data-driven Control of Markov Decision Processes. PhD thesis, MIT, 2007.

A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798, 2005.

M. Petrik and D. Subramanian. RAAM: The benefits of robustness in approximating aggregated MDPs in reinforcement learning. In Neural Information Processing Systems, 2014.

M. Petrik and X. Wu. Optimal threshold control for energy arbitrage with degradable battery storage. In Uncertainty in Artificial Intelligence, pages 692–701, 2015.

M. Pirotta, M. Restelli, and D. Calandriello. Safe policy iteration. In Proceedings of the 30th International Conference on Machine Learning, 2013.

P. Thomas, G. Theocharous, and M. Ghavamzadeh. High confidence policy improvement. In International Conference on Machine Learning, pages 2380–2388, 2015.

P. Thomas, G. Theocharous, and M. Ghavamzadeh. High confidence off-policy evaluation. In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence, 2015.

T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M. Weinberger. Inequalities for the L1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep., 2003.

W. Wiesemann, D. Kuhn, and B. Rustem. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153–183, 2013.

H. Xu and S. Mannor. Parametric regret in uncertain Markov decision processes. In Proceedings of the IEEE Conference on Decision and Control, pages 3606–3613, 2009.
Can Active Memory Replace Attention?
Łukasz Kaiser
Google Brain
[email protected]
Samy Bengio
Google Brain
[email protected]
Abstract
Several mechanisms to focus attention of a neural network on selected parts of its
input or memory have been used successfully in deep learning models in recent
years. Attention has improved image classification, image captioning, speech
recognition, generative models, and learning algorithmic tasks, but it had probably
the largest impact on neural machine translation.
Recently, similar improvements have been obtained using alternative mechanisms
that do not focus on a single part of a memory but operate on all of it in parallel,
in a uniform way. Such mechanism, which we call active memory, improved over
attention in algorithmic tasks, image processing, and in generative modelling.
So far, however, active memory has not improved over attention for most natural
language processing tasks, in particular for machine translation. We analyze this
shortcoming in this paper and propose an extended model of active memory that
matches existing attention models on neural machine translation and generalizes
better to longer sentences. We investigate this model and explain why previous
active memory models did not succeed. Finally, we discuss when active memory
brings most benefits and where attention can be a better choice.
1 Introduction
Recent successes of deep neural networks have spanned many domains, from computer vision [1] to
speech recognition [2] and many other tasks. In particular, sequence-to-sequence recurrent neural
networks (RNNs) with long short-term memory (LSTM) cells [3] have proven especially successful
at natural language processing (NLP) tasks, including machine translation [4, 5, 6].
The basic sequence-to-sequence architecture for machine translation is composed of an RNN encoder
which reads the source sentence one token at a time and transforms it into a fixed-sized state vector.
This is followed by an RNN decoder, which generates the target sentence, one token at a time, from
the state vector. While a pure sequence-to-sequence recurrent neural network can already obtain good
translation results [4, 6], it suffers from the fact that the whole sentence to be translated needs to be
encoded into a single fixed-size vector. This clearly manifests itself in the degradation of translation
quality on longer sentences (see Figure 6) and hurts even more when there is less training data [7].
In [5], a successful mechanism to overcome this problem was presented: a neural model of attention.
In a sequence-to-sequence model with attention, one retains the outputs of all steps of the encoder
and concatenates them to a memory tensor. At each step of the decoder, a probability distribution
over this memory is computed and used to estimate a weighted average encoder representation to be
used as input to the next decoder step. The decoder can hence focus on different parts of the encoder
representation while producing tokens. Figure 1 illustrates a single step of this process.
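A minimal dot-product version of the attention read shown in Figure 1 below; the model in [5] uses a learned alignment network, so the simplified scoring here is an assumption of this sketch.

import numpy as np

def attention_read(state, memory):
    """state: [m] decoder state; memory: [n, m] retained encoder outputs.
    Returns the weighted average of memory used to compute the new state."""
    scores = memory @ state                   # one score per memory position
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()                  # mask over memory (Figure 1)
    return weights @ memory                   # focused, weighted average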
Figure 1: Attention model. The state vector is used to compute a probability distribution over memory. Weighted average of memory elements, with focus on one of them, is used to compute the new state.

The attention mechanism has proven useful well beyond the machine translation task. Image models can benefit from attention too; for instance, image captioning models can focus on the relevant parts of the image when describing it [8]; generative models for images yield especially good results with attention, as was demonstrated by the DRAW model [9], where the network focuses on a part of the image to produce at a given time. Another interesting use-case for the attention mechanism is the
Neural Turing Machine [10], which can learn basic algorithms and generalize beyond the length of
the training instances.
While the attention mechanism is very successful, one important limitation is built into its definition.
Since the attention mask is computed using a Softmax, it by definition tries to focus on a single
element of the memory it is attending to. In the extreme case, also known as hard attention [8], one of
the memory elements is selected and the selection is trained using the REINFORCE algorithm (since
this is not differentiable) [11]. It is easy to demonstrate that this restriction can make some tasks
almost unlearnable for an attention model. For example, consider the task of adding two decimal
numbers, presented one after another like this:
Input   1 2 5 0 + 2 3 1 5
Output  3 5 6 5
A recurrent neural network can have the carry-over in its state and could learn to shift its attention to
subsequent digits. But that is only possible if there are two attention heads, attending to the first and
to the second number. If only a single attention mechanism is present, the model will have a hard
time learning this task and will not generalize properly, as was demonstrated in [12, 13].
A solution to this problem, already proposed in the recent literature (for instance, the Neural GPU
from [12]), is to allow the model to access and change all its memory at each decoding step. We
will call this mechanism an active memory. While it might seem more expensive than attention
models, it is actually not, since the attention mechanism needs to compute an attention score for all
its memory as well in order to focus on the most appropriate part. The approximate complexity of an
attention mechanism is therefore the same as the complexity of the active memory. In practice, we
get step-times of around 1.7 seconds for an active memory model, the Extended Neural GPU introduced below, and 1.2 seconds for a comparable model with an attention mechanism. But active memory can
potentially make parallel computations on the whole memory, as depicted in Figure 2.
Figure 2: Active memory model. The whole memory takes part in the computation at every step.
Each element of memory is active and changes in a uniform way, e.g., using a convolution.
Active memory is a natural choice for image models as they usually operate on a canvas. And indeed,
recent works have shown that actively updating the canvas that will be used to produce the final results
can be beneficial. Residual networks [14], the currently best performing model on the ImageNet task,
falls into this category. In [15] it was shown that the weights of different layers of a residual network
can be tied (so it becomes recurrent), without degrading performance. Other models that operate on
the whole canvas at each step were presented in [16, 17]. Both of these models are generative and
show very good performance, yielding better results than the original DRAW model. Thus, the active
memory approach seems to be a better choice for image models.
But what about non-image models? The Neural GPUs [12] demonstrated that active memory yields
superior results on algorithmic tasks. But can it be applied to real-world problems? In particular,
the original attention model brought a great success to natural language processing, esp. to neural
machine translation. Can active memory be applied to this task on a large scale?
We answer this question positively, by presenting an extension of the Neural GPU model that yields
good results for neural machine translation. This model allows us to investigate in depth a number of
questions about the relationship between attention and active memory. We clarify why the previous
active memory model did not succeed on machine translation by showing how it is related to the
inherent dependencies in the target distributions, and we study a few variants of the model that show
how a recurrent structure on the output side is necessary to obtain good results.
2 Active Memory Models
In the previous section, we used the term active memory broadly, referring to any model where every
part of the memory undergoes active change at every step. This is in contrast to attention models
where only a small part of the memory changes at every step, or where the memory remains constant.
The exact implementation of an active change of the memory might vary from model to model. In
the present paper, we will focus on the most common ways this change is implemented that all rely
on the convolution operator.
The convolution acts on a kernel bank and a 3-dimensional tensor. Our kernel banks are 4-dimensional tensors of shape [k_w, k_h, m, m], i.e., they contain k_w · k_h · m² parameters, where k_w and k_h are the kernel width and height. A kernel bank U can be convolved with a 3-dimensional tensor s of shape [w, h, m], which results in the tensor U ∗ s of the same shape as s, defined by:

\[ (U * s)[x, y, i] \;=\; \sum_{u=-\lfloor k_w/2 \rfloor}^{\lfloor k_w/2 \rfloor} \;\; \sum_{v=-\lfloor k_h/2 \rfloor}^{\lfloor k_h/2 \rfloor} \;\; \sum_{c=1}^{m} \; s[x+u, \, y+v, \, c] \cdot U[u, v, c, i]. \]
In the equation above the index x + u might sometimes be negative or larger than the size of s, and
in such cases we assume the value is 0. This corresponds to the standard convolution operator used
in many deep learning toolkits, with zero padding on both sides and stride 1. Using the standard
operator has the advantage that it is heavily optimized and can directly benefit from any new work
(e.g., [18]) on optimizing convolutions.
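A direct, unoptimized transcription of this convolution can be useful for checking shapes; a real implementation would call a framework's conv2d, as the text suggests. The sketch below assumes odd k_w and k_h, as used later in the paper (3 x 3).

import numpy as np

def conv_bank(U, s):
    """U: kernel bank of shape [kw, kh, m, m]; s: tensor of shape [w, h, m].
    Returns U * s of shape [w, h, m] with zero padding and stride 1."""
    kw, kh, m, _ = U.shape
    w, h, _ = s.shape
    out = np.zeros_like(s)
    for x in range(w):
        for y in range(h):
            for u in range(-(kw // 2), kw // 2 + 1):
                for v in range(-(kh // 2), kh // 2 + 1):
                    if 0 <= x + u < w and 0 <= y + v < h:   # zero padding
                        # s[x+u, y+v, :] @ U[u, v, :, :] sums over channels c
                        out[x, y] += s[x + u, y + v] @ U[u + kw // 2, v + kh // 2]
    return out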
Given a memory tensor s, an active memory model will produce the next memory s′ by using a number of convolutions on s and combining them. In the most basic setting, a residual active memory model will be defined as

\[ s' = s + U * s, \]

i.e., it will only add to an already existing state.
While residual models have been successful in image analysis [14] and generation [16], they might suffer from the vanishing gradient problem in the same way as recurrent neural networks do. Therefore, in the same spirit as LSTM gates [3] and GRU gates [19] improve over pure RNNs, one can introduce convolutional LSTM and GRU operators. Let us focus on the convolutional GRU, which we define in the same way as in [12], namely:

\[ \mathrm{CGRU}(s) = u \odot s + (1 - u) \odot \tanh(U * (r \odot s) + B), \tag{1} \]
\[ \text{where} \quad u = \sigma(U' * s + B') \quad \text{and} \quad r = \sigma(U'' * s + B''). \]
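A CGRU step in this notation, reusing conv_bank from the sketch above; the parameter packing is our own convention.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cgru(s, params):
    """One step of Equation (1). params = (U, Up, Upp, B, Bp, Bpp), where
    Up/Bp and Upp/Bpp are the primed kernel banks and biases of u and r."""
    U, Up, Upp, B, Bp, Bpp = params
    u = sigmoid(conv_bank(Up, s) + Bp)     # update gate
    r = sigmoid(conv_bank(Upp, s) + Bpp)   # reset gate
    return u * s + (1.0 - u) * np.tanh(conv_bank(U, r * s) + B)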
As a baseline for our investigation of active memory models, we will use the Neural GPU model from [12], depicted in Figure 3, and defined as follows.

Figure 3: Neural GPU with 2 layers and width w = 3 unfolded in time.

The given sequence i = (i₁, . . . , i_n) of n discrete
symbols from {0, . . . , I} is first embedded into the tensor s₀ by concatenating the vectors obtained from an embedding lookup of the input symbols into its first column. More precisely, we create the starting tensor s₀ of shape [w, n, m] by using an embedding matrix E of shape [I, m] and setting s₀[0, k, :] = E[i_k] (in python notation) for all k = 1 . . . n (here i₁, . . . , i_n is the input). All other elements of s₀ are set to 0. Then, we apply l different CGRU gates in turn for n steps to produce the final tensor s_fin:

\[ s_{t+1} = \mathrm{CGRU}_l(\mathrm{CGRU}_{l-1}(\dots \mathrm{CGRU}_1(s_t) \dots)) \quad \text{and} \quad s_{fin} = s_n. \]

The result of a Neural GPU is produced by multiplying each item in the first column of s_fin by an output matrix O to obtain the logits l_k = O s_fin[0, k, :] and then selecting the largest one: o_k = argmax(l_k). During training we use the standard loss function, i.e., we compute a Softmax over the logits l_k and use the negative log probability of the target as the loss.
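Putting the pieces together, a sketch of the baseline forward pass with greedy decoding; the width and shape conventions follow the text, O is stored transposed ([m, I]) for convenience, and cgru is the sketch above.

import numpy as np

def neural_gpu(inputs, E, O, layers, w=4):
    """inputs: list of n symbol ids; E: [I, m] embedding matrix; O: [m, I]
    output matrix; layers: list of l CGRU parameter tuples. Runs n steps of
    the l-layer CGRU stack and reads the result off the first column."""
    n, m = len(inputs), E.shape[1]
    s = np.zeros((w, n, m))
    s[0] = E[np.asarray(inputs)]          # embed the input in the first column
    for _ in range(n):                    # n steps...
        for params in layers:             # ...of l CGRU gates each
            s = cgru(s, params)
    logits = s[0] @ O                     # l_k = O s_fin[0, k, :] for every k
    return logits.argmax(axis=1)          # o_k = argmax(l_k)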
2.1 The Markovian Neural GPU
The baseline Neural GPU model yields very poor results on neural machine translation: its per-word
perplexity on WMT¹ does not go below 30 (good models on this task go below 4), and its BLEU
scores are also very bad (below 5, while good models are higher than 20). Which part of the model is
responsible for such bad results?
It turns out that the main culprit is the output generator. As one can see in Figure 3 above, every
output symbol is generated independently of all other output symbols, conditionally only on the state s_fin. This is fine for learning purely deterministic functions, like the toy tasks the Neural GPU was designed for. But it does not work for harder real-world problems, where there could be multiple possible outputs for each input.

The most basic way to mitigate this problem is to make every output symbol depend on the previous output. This only changes the output generation, not the state, so the definition of the model is the same as above until s_fin. The result is then obtained by multiplying by an output matrix O each item from the first column of s_fin concatenated with the embedding of the previous output generated by another embedding matrix E′:

\[ l_k = O \,\mathrm{concat}\big( s_{fin}[0, k, :], \; E' o_{k-1} \big). \]
For k = 0 we use a special symbol o_{k−1} = GO and, to get the output, we select o_k = argmax(l_k). During training we use the standard loss function, i.e., we compute a Softmax over the logits l_k and use the negative log probability of the target as the loss. Also, as is standard in recurrent networks [4], we use teacher forcing, i.e., during training we provide the true output label as o_{k−1} instead of using the previous output generated by the model. This means that the loss incurred from generating o_k does not directly influence the value of o_{k−1}. We depict this model in Figure 4.
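The Markovian output head at inference time (during training, teacher forcing would replace prev with the true label); the GO id and the shape conventions are placeholder assumptions of this sketch.

import numpy as np

def markovian_decode(s_fin, O, E_prime, go_id=0):
    """s_fin: [w, n, m] final tensor; E_prime: [I, m'] embedding of previous
    outputs; O: [m + m', I]. Each o_k depends on s_fin and o_{k-1} only."""
    n = s_fin.shape[1]
    prev, outputs = go_id, []
    for k in range(n):
        features = np.concatenate([s_fin[0, k], E_prime[prev]])
        prev = int(np.argmax(features @ O))   # l_k = O concat(...)
        outputs.append(prev)
    return outputs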
2.2 The Extended Neural GPU
The Markovian Neural GPU yields much better results on neural machine translation than the baseline
model: its per-word perplexity reaches about 12 and its BLEU scores improve a bit. But these results
are still far from those achieved by models with attention.
¹ See Section 3 for more details on the experimental setting.
Figure 4: Markovian Neural GPU. Each output o_k is conditionally dependent on the final tensor s_fin = s_n and the previous output symbol o_{k−1}.
Figure 5: Extended Neural GPU with active memory decoder. See the text below for definition.
Could it be that the Markovian dependence of the outputs is too weak for this problem, that a full
recurrent dependence of the state is needed for good performance? We test this by extending the
baseline model with an active memory decoder, as depicted in Figure 5.
The definition of the Extended Neural GPU follows the baseline model until sfin = sn . We consider
sn as the starting point for the active memory decoder, i.e., we set d0 = sn . In the active memory
decoder we will also use a separate output tape tensor p of the same shape as d0 , i.e., p is of shape
[w, n, m]. We start with p0 set to all 0 and define the decoder states by
dt+1 = CGRUdl (CGRUdl?1 (. . . CGRUd1 (dt , pt ) . . . , pt ), pt ),
where CGRUd is defined just like CGRU in Equation (1) but with additional input as highlighted
below in bold:
CGRUd (s, p) = u s + (1 ? u) tanh(U ? (r s) + W ? p + B), where
u = ?(U 0 ? s + W 0 ? p + B 0 )
and r = ?(U 00 ? s + W 00 ? p + B 00 ).
(2)
We generate the k-th output by multiplying the k-th vector in the first column of d_k by the output
matrix O, i.e., l_k = O d_k[0, k, :]. We then select o_k = argmax(l_k). The symbol o_k is then embedded
back into a dense representation using another embedding matrix E′ and we put it into the k-th place
on the output tape p, i.e., we define

    p_{k+1} = p_k   with   p_{k+1}[0, k, :] ← E′ o_k.
In this way, we accumulate (embedded) outputs step-by-step on the output tape p. Each step p_t has
access to all outputs produced in all steps before t.
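The decoder loop can be sketched as follows. This is our own NumPy illustration, not the released TensorFlow code: conv(kernel, x) stands for an assumed same-padding convolution over the [w, n, m] tensor, and the parameter dictionary P is a placeholder.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cgru_d(s, p, conv, P):
    # One CGRU^d step (Eq. 2): a CGRU whose gates also see the output tape p.
    u = sigmoid(conv(P["U'"], s) + conv(P["W'"], p) + P["B'"])     # update gate
    r = sigmoid(conv(P["U''"], s) + conv(P["W''"], p) + P["B''"])  # reset gate
    c = np.tanh(conv(P["U"], r * s) + conv(P["W"], p) + P["B"])
    return u * s + (1.0 - u) * c

def active_memory_decode(d0, layers, conv, O, E_prime, n):
    # Greedy decoding: at step k, run the l stacked CGRU^d layers, read the
    # logits from d[0, k, :], and write the embedded output onto the tape p.
    d, p = d0, np.zeros_like(d0)
    outputs = []
    for k in range(n):
        for P in layers:       # d_{t+1} = CGRU^d_l(... CGRU^d_1(d_t, p_t) ...)
            d = cgru_d(d, p, conv, P)
        o_k = int(np.argmax(O @ d[0, k, :]))
        outputs.append(o_k)
        p = p.copy()
        p[0, k, :] = E_prime[o_k]          # p_{k+1}[0, k, :] <- E' o_k
    return outputs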
Again, it is important to note that during training we use teacher forcing, i.e., we provide the true
output labels for o_k instead of using the outputs generated by the model.
2.3
Related Models
A convolutional architecture has already been used to obtain good results in word-level neural
machine translation in [20] and more recently in [21]. These models use a standard RNN on top of
the convolution to generate the output and avoid the output dependence problem in this way. But
the state of this RNN has a fixed size, and in the first one the sentence representation generated by
the convolutional network is also a fixed-size vector. Therefore, while superficially similar to active
memory, these models are more similar to fixed-size memory models. The first one suffers from all
the limitations of sequence-to-sequence models without attention [4, 6] that we discussed before.
Another recently introduced model, the Grid LSTM [22], might look less related to active memory,
as it does not use convolutions at all. But in fact it is to a large extent an active memory model: the
memory is on the diagonal of the grid of the running LSTM cells. The Reencoder architecture for
neural machine translation introduced in that paper is therefore related to the Extended Neural GPU.
But it differs in a number of ways. For one, the input is provided step-wise, so the network cannot
start processing the whole input in parallel, as in our model. The diagonal memory changes in size
and the model is a 3-dimensional grid, which might not be necessary for language processing. The
Reencoder also does not use convolutions, which we find crucial for performance. The experiments from
[22] are only performed on a very small dataset of 44K short sentences. This is almost 1000 times
smaller than the dataset we are experimenting with, which makes it unclear whether Grid LSTMs can be
applied to large-scale real-world tasks.
In image processing, in addition to the captioning [8] and generative models [16, 17] that we
mentioned before, there are several other active memory models. They use convolutional LSTMs, an
architecture similar to CGRU, and have recently been used for weather prediction [23] and image
compression [24], in both cases surpassing the state-of-the-art.
3
Experiments
Since all components of our models (defined above) are differentiable, we can train them using any
stochastic gradient descent optimizer. For the results presented in this paper we used the Adam
optimizer [25] with ε = 10⁻⁴ and gradients norm clipped to 1. The number of layers was set to
l = 2, the width of the state tensors was constant at w = 4, the number of maps was m = 512, and
the convolution kernels width and height was always k_w = k_h = 3.²
As our main test, we train the models discussed above and a baseline attention model on the WMT?14
English-French translation task. This is the same task that was used to introduce attention [5], but,
to avoid the problem with the UNK token, we spell out each word that is not in the vocabulary. More
precisely, we use a 32K vocabulary that includes all characters and the most common words, and
every word that is not in the vocabulary is spelled-out letter-by-letter. We also include a special SPACE
symbol, which is used to mark spaces between characters (we assume spaces between words). We
train without any data filtering on the WMT?14 corpus and test on the WMT?14 test set (newstest?14).
As a baseline, we use a GRU model with attention that is almost identical to the original one from
[5], except that it has 2 layers of GRU cells, each with 1024 units. Tokens from the vocabulary are
embedded into vectors of size 512, and attention is put on the top layer. This model is identical to the
one in [7], except that it uses GRU cells instead of LSTM cells. It has about 120M parameters, while
our Extended Neural GPU model has about 110M parameters. Better results have been reported on
this task with attention models with more parameters, but we aim at a baseline similar in size to the
active memory model we are using.
When decoding from the Extended Neural GPU model, one has to provide the expected size of the
output, as it determines the size of the memory. We test all sizes between input size and double the
input size using a greedy decoder and pick the result with smallest log-perplexity (highest likelihood).
This is expensive, so we only use a very basic beam-search with beam of size 2 and no length
normalization. It is possible to reduce the cost by predicting the output length: we tried a basic
estimator based just on input sentence length and it decreased the BLEU score by 0.3. Better training
and decoding could remove the need to predict output length, but we leave this for future work.
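The length search itself is straightforward; a hedged sketch is given below, where greedy_decode and log_perplexity are assumed wrappers around the trained model, not actual APIs.

def best_output(model, inputs):
    # Try every output length between |input| and 2|input|, decode greedily,
    # and keep the candidate with the smallest per-word log-perplexity.
    n = len(inputs)
    best, best_lp = None, float("inf")
    for length in range(n, 2 * n + 1):
        cand = model.greedy_decode(inputs, out_len=length)  # assumed wrapper
        lp = model.log_perplexity(inputs, cand)             # assumed wrapper
        if lp < best_lp:
            best, best_lp = cand, lp
    return best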
² Our model was implemented using TensorFlow [26]. Its code is available as open-source at
https://github.com/tensorflow/models/tree/master/neural_gpu/.
Model                   Perplexity (log)   BLEU
Neural GPU              30.1 (3.5)         < 5
Markovian Neural GPU    11.8 (2.5)         < 5
Extended Neural GPU     3.3 (1.19)         29.6
GRU+Attention           3.4 (1.22)         26.4

Table 1: Results on the WMT English→French translation task. We provide the average per-word
perplexity (and its logarithm in parenthesis) and the BLEU score. Perplexity is computed on the test
set with the ground truth provided, so it does not depend on the decoder.
For the baseline model, we use a full beam-search decoder with beam of size 12, length normalization
and an attention coverage penalty in the decoder. This is a basic penalty that pushes the decoder to
attend to all words in the source sentence. We experimented with more elaborate methods following
[27] but it did not improve our results. The parameters for length normalization and coverage penalty
are tuned on the development set (newstest?13). The final BLEU scores and per-word perplexities for
these different models are presented in Table 1. Worse models have higher variance of their BLEU
scores, so we only write < 5 for these models.
One can see from Table 1 that an active memory model can indeed match an attention model on
the machine translation task, even with slightly fewer parameters. It is interesting to note that the
active memory model does not need the length normalization that is necessary for the attention model
(esp. when rare words are spelled). We conjecture that active memory inherently generalizes better
from shorter examples and makes decoding easier, which is welcome news, since tuning decoders is a large
problem in sequence-to-sequence models.
In addition to the summary results from Table 1, we analyzed the performance of the models on
sentences of different lengths. This was the key problem solved by the attention mechanism, so it is
worth asking if active memory solves it as well. In Figure 6 we plot the BLEU scores on the test set
for sentences in each length bucket, bucketing by 10, i.e., for lengths (0, 10], (10, 20] and so on. We
plot the curves for the Extended Neural GPU model, the long baseline GRU model with attention,
and, for comparison, we add the numbers for a non-attention model from Figure 2 of [5]. (Note
that these numbers are for a model that uses different tokenization, so they are not fully comparable,
but still provide a context.)
As can be seen, our active memory model is less sensitive to sentence length than the attention
baseline. It indeed solves the problem that the attention mechanism was designed to solve.
Parsing. In addition to the main large-scale translation task, we tested the Extended Neural GPU
on English constituency parsing, the same task as in [7]. We only used the standard WSJ dataset for
training. It is small by neural network standards, as it contains only 40K sentences. We trained the
Extended Neural GPU with the same settings as above, only with m = 256 (instead of m = 512)
and dropout of 30% in each step. During decoding, we selected well-bracketed outputs with the right
number of POS-tags from all lengths considered. Evaluated with the standard EVALB tool on the
standard WSJ 23 test set, we got an 85.1 F1 score. This is lower than the 88.3 reported in [7], but we didn't
use any of their optimizations (no early stopping, no POS-tag substitution, no special tuning). Since a
pure sequence-to-sequence model has F1 score well below 70, this shows that the Extended Neural
GPU is versatile and can learn and generalize well even on small data-sets.
4
Discussion
To better understand the main shortcoming of previous active memory models, let us look at the
average log-perplexities of the different models in Table 1. A pure Neural GPU model yields
3.5, a Markovian one yields 2.5, and only a model with full dependence, trained with teacher forcing,
achieves about 1.2. The recurrent dependence in generating the output distribution turns out to be the key
to achieving good performance.
We find it illuminating that the issue of dependencies in the output distribution can be disentangled
from the particularities of the model or model class. In earlier works, such dependence (and training
with teacher forcing) was always used in LSTM and GRU models, but very rarely in other kinds of
models. We show that it can be beneficial to consider this issue separately from the model architecture.
It allows us to create the Extended Neural GPU, and this way of thinking might also prove fruitful for
other classes of models.

[Figure 6: BLEU score (the higher the better) vs. source sentence length, comparing the Extended Neural GPU, the GRU+Attention baseline, and a no-attention model from [5].]
When the issue of recurrent output dependencies is addressed, as we do in the Extended Neural GPU,
an active memory model can indeed match or exceed attention models on a large-scale real-world
task. Does this mean we can always replace attention by active memory?
The answer could be yes for the case of soft attention. Its cost is approximately the same as active
memory, it performs much worse on some tasks like learning algorithms, and ? with the introduction
of the Extended Neural GPU ? we do not know of a task where it performs clearly better.
Still, an attention mask is a very natural concept, and it is probable that some tasks can benefit from
a selector that focuses on single items by definition. This is especially obvious for hard attention:
it can be used over large memories with potentially much less computational cost than an active
memory, so it might be indispensable for devising long-term memory mechanisms. Luckily, active
memory and attention are not exclusive, and we look forward to investigating models that combine
these mechanisms.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems, 2012.
[2] George E. Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks
for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech & Language Processing,
20(1):30?42, 2012.
[3] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780,
1997.
[4] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks. In
Advances in Neural Information Processing Systems, pages 3104?3112, 2014.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning
to align and translate. CoRR, abs/1409.0473, 2014.
[6] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua
Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation.
CoRR, abs/1406.1078, 2014.
[7] Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In Advances in
Neural Information Processing Systems, 2015.
[8] Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S.
Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention.
In ICML, 2015.
[9] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A
recurrent neural network for image generation. CoRR, abs/1502.04623, 2015.
[10] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014.
[11] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine Learning, 8:229–256, 1992.
[12] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning
Representations (ICLR), 2016.
[13] A. Joulin and T. Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances
in Neural Information Processing Systems, (NIPS), 2015.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In CVPR, 2016.
[15] Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks
and visual cortex. CoRR, abs/1604.03640, 2016.
[16] Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot
generalization in deep generative models. CoRR, abs/1603.05106, 2016.
[17] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards
conceptual compression. CoRR, abs/1604.08772, 2016.
[18] Andrew Lavin and Scott Gray. Fast algorithms for convolutional neural networks. CoRR, abs/1509.09308,
2015.
[19] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation:
Encoder-decoder approaches. CoRR, abs/1409.1259, 2014.
[20] Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In Proceedings EMNLP
2013, pages 1700?1709, 2013.
[21] Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. Encoding source
language with convolutional neural network for machine translation. In ACL, pages 20?30, 2015.
[22] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In International
Conference on Learning Representations, 2016.
[23] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural
Information Processing Systems, 2015.
[24] George Toderici, Sean M. O?Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja,
Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks.
In International Conference on Learning Representations, 2016.
[25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980,
2014.
[26] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado,
Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey
Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg,
Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens,
Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda
Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.
Tensorflow: Large-scale machine learning on heterogeneous distributed systems, 2015.
[27] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling coverage for neural machine
translation. CoRR, abs/1601.04811, 2016.
Kronecker Determinantal Point Processes
Zelda Mariet
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Suvrit Sra
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
Determinantal Point Processes (DPPs) are probabilistic models over all subsets
of a ground set of N items. They have recently gained prominence in several applications that rely on "diverse" subsets. However, their applicability to large
problems is still limited due to the O(N³) complexity of core tasks such as sampling
and learning. We enable efficient sampling and learning for DPPs by introducing
KronDPP, a DPP model whose kernel matrix decomposes as a tensor product of
multiple smaller kernel matrices. This decomposition immediately enables fast
exact sampling. But contrary to what one may expect, leveraging the Kronecker
product structure for speeding up DPP learning turns out to be more difficult. We
overcome this challenge, and derive batch and stochastic optimization algorithms
for efficiently learning the parameters of a KronDPP.
1
Introduction
Determinantal Point Processes (DPPs) are discrete probability models over the subsets of a ground
set of N items. They provide an elegant model to assign probabilities to an exponentially large
sample space, while permitting tractable (polynomial time) sampling and marginalization. They are often
used to provide models that balance ?diversity? and quality, characteristics valuable to numerous
problems in machine learning and related areas [17].
The antecedents of DPPs lie in statistical mechanics [24], but since the seminal work of [15] they
have made inroads into machine learning. By now they have been applied to a variety of problems such as document and video summarization [6, 21], sensor placement [14], recommender
systems [31], and object retrieval [2]. More recently, they have been used to compress fully-connected layers in neural networks [26] and to provide optimal sampling procedures for the Nyström method [20]. The more general study of DPP properties has also garnered a significant amount
of interest, see e.g., [1, 5, 7, 12, 16–18, 23].
However, despite their elegance and tractability, widespread adoption of DPPs is impeded by the
O(N³) cost of basic tasks such as (exact) sampling [12, 17] and learning [10, 12, 17, 25]. This
cost has motivated a string of recent works on approximate sampling methods such as MCMC
samplers [13, 20] or core-set based samplers [19]. The task of learning a DPP from data has received
less attention; the methods of [10, 25] cost O(N³) per iteration, which is clearly unacceptable for
realistic settings. This burden is partially ameliorated in [9], who restrict to learning low-rank DPPs,
though at the expense of being unable to sample subsets larger than the chosen rank.
These considerations motivate us to introduce KronDPP, a DPP model that uses Kronecker (tensor)
product kernels. As a result, KronDPP enables us to learn large-sized DPP kernels, while also
permitting efficient (exact and approximate) sampling. The use of Kronecker products to scale
matrix models is a popular and effective idea in several machine-learning settings [8, 27, 28, 30].
But as we will see, its efficient execution for DPPs turns out to be surprisingly challenging.
To make our discussion more concrete, we recall some basic facts now. Suppose we have a ground
set of N items Y = {1, . . . , N}. A discrete DPP over Y is a probability measure P on 2^Y
parametrized by a positive definite matrix K (the marginal kernel) such that 0 ⪯ K ⪯ I, so that for
any Y ∈ 2^Y drawn from P, the measure satisfies

    ∀A ⊆ Y,   P(A ⊆ Y) = det(K_A),    (1)

where K_A is the submatrix of K indexed by elements in A (i.e., K_A = [K_ij]_{i,j∈A}). If a DPP
with marginal kernel K assigns nonzero probability to the empty set, the DPP can alternatively be
parametrized by a positive definite matrix L (the DPP kernel) so that

    P(Y) ∝ det(L_Y)   ⟹   P(Y) = det(L_Y) / det(L + I).    (2)

A brief manipulation (see e.g., [17, Eq. 15]) shows that when the inverse exists, L = K(I − K)⁻¹.
The determinants, such as in the normalization constant in (2), make operations over DPPs typically
cost O(N³), which is a key impediment to their scalability.
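For concreteness, a direct NumPy evaluation of (2) reads as follows; the slogdet of the N × N matrix L + I is exactly the O(N³) step referred to above. This is our own illustration, not code from the paper.

import numpy as np

def dpp_log_prob(L, Y):
    # log P(Y) = log det(L_Y) - log det(L + I), cf. Eq. (2).
    _, logdet_Y = np.linalg.slogdet(L[np.ix_(Y, Y)])
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))  # O(N^3) step
    return logdet_Y - logdet_norm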
Therefore, if we consider a class of DPP kernels whose structure makes it easy to compute determinants, we should be able to scale up DPPs. An alternative approach towards scalability is to restrict
the size of the subsets, as done in k-DPP [16] or when using rank-k DPP kernels [9] (where k ≪ N).
Without further assumptions, both approaches still require O(N³) preprocessing for exact sampling;
another caveat is that they limit the DPP model by assigning zero probabilities to sets of cardinality
greater than k.
In contrast, KronDPP uses a kernel matrix of the form L = L1 ⊗ · · · ⊗ Lm, where each sub-kernel Li is a smaller positive definite matrix. This decomposition has two key advantages: (i) it
significantly lowers the number of parameters required to specify the DPP from N² to O(N^{2/m})
(assuming the sub-kernels are roughly the same size); and (ii) it enables fast sampling and learning.
For ease of exposition, we describe specific details of KronDPP for m = 2; as will become clear
from the analysis, typically the special cases m = 2 and m = 3 should suffice to obtain low-complexity sampling and learning algorithms.
Contributions. Our main contribution is the KronDPP model along with efficient algorithms for
sampling from it and learning a Kronecker factored kernel. Specifically, inspired by the algorithm
of [25], we develop KrK-Picard (Kronecker-Kernel Picard), a block-coordinate ascent procedure
that generates a sequence of Kronecker factored estimates of the DPP kernel while ensuring monotonic progress on its (difficult, nonconvex) objective function. More importantly, we show how
to implement KrK-Picard to run in O(N²) time when implemented as a batch method, and in
O(N^{3/2}) time and O(N) space, when implemented as a stochastic method. As alluded to above,
unlike many other uses of Kronecker models, KronDPP does not admit trivial scaling up, largely
theoretical nugget that arises from our analysis is the combinatorial problem that we call subset clustering, a problem whose (even approximate) solution can lead to further speedups of our algorithms.
2
Preliminaries
We begin by recalling basic properties of Kronecker products needed in our analysis; we omit proofs
of these well-known results for brevity. The Kronecker (tensor) product of two matrices A ∈ ℝ^{p×q}
and B ∈ ℝ^{r×s} is defined as the pr × qs block matrix A ⊗ B = [a_ij B]_{i,j=1}^{p,q}.
We denote the block a_ij B in A ⊗ B by (A ⊗ B)_(ij) for any valid pair (i, j), and extend the notation
to non-Kronecker-product matrices to indicate the submatrix of size r × s at position (i, j).
Proposition 2.1. Let A, B, C, D be matrices of sizes so that AC and BD are well-defined. Then,
(i) if A, B ⪰ 0, then A ⊗ B ⪰ 0;
(ii) if A and B are invertible then so is A ⊗ B, with (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹;
(iii) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
An important consequence of Prop. 2.1(iii) is the following corollary.
Corollary 2.2. Let A = P_A D_A P_A^⊤ and B = P_B D_B P_B^⊤ be the eigenvector decompositions of A
and B. Then, A ⊗ B diagonalizes as (P_A ⊗ P_B)(D_A ⊗ D_B)(P_A ⊗ P_B)^⊤.
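Corollary 2.2 is what makes spectral operations cheap for Kronecker kernels; the following NumPy sketch (our own illustration) obtains the full spectrum without ever forming the N1N2 × N1N2 matrix.

import numpy as np

def kron_eig(L1, L2):
    # Eigenpairs of L1 (x) L2: eigenvalues are the products d1[i] * d2[j],
    # eigenvectors are the Kronecker products of the sub-kernel eigenvectors.
    d1, P1 = np.linalg.eigh(L1)   # O(N1^3)
    d2, P2 = np.linalg.eigh(L2)   # O(N2^3)
    return np.kron(d1, d2), (P1, P2)   # never form the N1*N2 x N1*N2 matrix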
We will also need the notion of partial trace operators, which are perhaps less well-known:
Definition 2.3. Let A ∈ ℝ^{N1N2×N1N2}. The partial traces Tr1(A) and Tr2(A) are defined as
follows:

    Tr1(A) := [Tr(A_(ij))]_{1≤i,j≤N1} ∈ ℝ^{N1×N1},    Tr2(A) := Σ_{i=1}^{N1} A_(ii) ∈ ℝ^{N2×N2}.

The action of partial traces is easy to visualize: indeed, Tr1(A ⊗ B) = Tr(B)A and Tr2(A ⊗ B) =
Tr(A)B. For us, the most important property of partial trace operators is their positivity.
Proposition 2.4. Tr1 and Tr2 are positive operators, i.e., for A ⪰ 0, Tr1(A) ⪰ 0 and Tr2(A) ⪰ 0.
Proof. Please refer to [4, Chap. 4].
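Both partial traces are easy to compute by viewing A as an N1 × N1 grid of N2 × N2 blocks; a NumPy sketch (our own illustration, not code from the paper):

import numpy as np

def partial_traces(A, N1, N2):
    # View A as an N1 x N1 grid of N2 x N2 blocks: blocks[i, :, j, :] = A_(ij).
    blocks = A.reshape(N1, N2, N1, N2)
    tr1 = np.einsum("ikjk->ij", blocks)   # Tr1(A)[i, j] = Tr(A_(ij))
    tr2 = np.einsum("ijik->jk", blocks)   # Tr2(A) = sum_i A_(ii)
    return tr1, tr2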
3
Learning the kernel matrix for KronDPP
In this section, we consider the key difficult task for KronDPPs: learning a Kronecker product
kernel matrix from n observed subsets Y1, . . . , Yn. Using the definition (2) of P(Yi), maximum-likelihood learning of a DPP with kernel L results in the optimization problem:

    argmax_{L⪰0} φ(L),    φ(L) = (1/n) Σ_{i=1}^{n} (log det(L_{Yi}) − log det(L + I)).    (3)

This problem is nonconvex and conjectured to be NP-hard [15, Conjecture 4.1]. Moreover the
constraint L ⪰ 0 is nontrivial to handle. Writing Ui as the indicator matrix for Yi of size N × |Yi|
so that L_{Yi} = Ui^⊤ L Ui, the gradient of φ is easily seen to be

    Δ := ∇φ(L) = (1/n) Σ_{i=1}^{n} Ui L_{Yi}⁻¹ Ui^⊤ − (L + I)⁻¹.    (4)

In [25], the authors derived an iterative method ("the Picard iteration") for computing an L that
solves Δ = 0 by running the simple iteration

    L ← L + LΔL.    (5)

Moreover, iteration (5) is guaranteed to monotonically increase the log-likelihood φ [25]. But these
benefits accrue at a cost of O(N³) per iteration, and furthermore a direct application of (5) cannot
guarantee the Kronecker structure required by KronDPP.
3.1
Optimization algorithm
Our aim is to obtain an efficient algorithm to (locally) optimize (3). Beyond its nonconvexity, the
Kronecker structure L = L1 ⊗ L2 imposes another constraint. As in [25] we first rewrite φ as a
function of S = L⁻¹, and re-arrange terms to write it as

    φ(S) = f(S) + g(S),   where   f(S) = log det(S)   and
    g(S) = (1/n) Σ_{i=1}^{n} log det(Ui^⊤ S⁻¹ Ui) − log det(I + S).    (6)

It is easy to see that f is concave, while a short argument shows that g is convex [25]. An appeal to
the convex-concave procedure [29] then shows that updating S by solving ∇f(S^(k+1)) + ∇g(S^(k)) =
0, which is what (5) does [25, Thm. 2.2], is guaranteed to monotonically increase φ.
But for KronDPP this idea does not apply so easily: due to the constraint L = L1 ⊗ L2 the function

    g̃ : (S1, S2) ↦ (1/n) Σ_{i=1}^{n} log det(Ui^⊤ (S1 ⊗ S2)⁻¹ Ui) − log det(I + S1 ⊗ S2)

fails to be convex, precluding an easy generalization. Nevertheless, for fixed S1 or S2 the functions

    f1 : S1 ↦ f(S1 ⊗ S2),   f2 : S2 ↦ f(S1 ⊗ S2),
    g1 : S1 ↦ g(S1 ⊗ S2),   g2 : S2 ↦ g(S1 ⊗ S2)

are once again concave or convex. Indeed, the map π : S1 ↦ S1 ⊗ S2 is linear and f is concave,
and f1 = f ∘ π is also concave; similarly, f2 is seen to be concave and g1 and g2 are convex. Hence,
by generalizing the arguments of [29, Thm. 2] to our "block-coordinate" setting, updating via

    ∇fi(Si^(k+1)) = −∇gi(Si^(k)),   for i = 1, 2,    (7)

should increase the log-likelihood φ at each iteration. We prove below that this is indeed the case,
and that updating as per (7) ensures positive definiteness of the iterates as well as monotonic ascent.
3.1.1
Positive definite iterates and ascent
In order to show the positive definiteness of the solutions to (7), we first derive their closed form.
Proposition 3.1 (Positive definite iterates). For S1 ≻ 0, S2 ≻ 0, the solutions to (7) are given by
the following expressions:

    ∇f1(X) = −∇g1(S1)  ⟺  X⁻¹ = Tr1((I ⊗ S2)(L + LΔL))/N2,
    ∇f2(X) = −∇g2(S2)  ⟺  X⁻¹ = Tr2((S1 ⊗ I)(L + LΔL))/N1.

Moreover, these solutions are positive definite.
Proof. The details are somewhat technical, and are hence given in Appendix A. We know that
L ≻ 0 ⟹ L + LΔL ≻ 0, because L + LΔL ⪰ L − L(I + L)⁻¹L ≻ 0. Since the partial trace operators are
positive (Prop. 2.4), it follows that the solutions to (7) are also positive definite.
We are now ready to establish that these updates ensure monotonic ascent in the log-likelihood.
Theorem 3.2 (Ascent). Starting with L1^(0) ≻ 0, L2^(0) ≻ 0, updating according to (7) generates
positive definite iterates L1^(k) and L2^(k), and the sequence {φ(L1^(k) ⊗ L2^(k))}_{k≥0} is non-decreasing.
Proof. Updating according to (7) generates positive definite matrices Si, and hence positive definite
subkernels Li = Si⁻¹. Moreover, due to the convexity of g1 and concavity of f1, for matrices A, B ≻ 0,

    f1(B) ≤ f1(A) + ∇f1(A)^⊤(B − A),    g1(A) ≥ g1(B) + ∇g1(B)^⊤(A − B).

Hence, f1(A) + g1(A) ≥ f1(B) + g1(B) + (∇f1(A) + ∇g1(B))^⊤(A − B).
Thus, if S1^(k), S1^(k+1) verify (7), by setting A = S1^(k+1) and B = S1^(k) we have

    φ(L1^(k+1) ⊗ L2^(k)) = f1(S1^(k+1)) + g1(S1^(k+1)) ≥ f1(S1^(k)) + g1(S1^(k)) = φ(L1^(k) ⊗ L2^(k)).

The same reasoning holds for L2, which proves the theorem.
As Tr1((I ⊗ S2)L) = N2 L1 (and similarly for L2), updating as in (7) is equivalent to updating

    L1 ← L1 + Tr1((I ⊗ L2⁻¹)(LΔL))/N2,    L2 ← L2 + Tr2((L1⁻¹ ⊗ I)(LΔL))/N1.

Generalization. We can generalize the updates to take an additional step-size parameter a:

    L1 ← L1 + a Tr1((I ⊗ L2⁻¹)(LΔL))/N2,    L2 ← L2 + a Tr2((L1⁻¹ ⊗ I)(LΔL))/N1.

Experimentally, a > 1 (as long as the updates remain positive definite) can provide faster convergence, although the monotonicity of the log-likelihood is no longer guaranteed. We found experimentally that the range of admissible a is larger than for Picard, but decreases as N grows larger.
The arguments above easily generalize to the multiblock case. Thus, when learning L = L1 ⊗ · · · ⊗
Lm, by writing E_ij for the matrix with a 1 in position (i, j) and zeros elsewhere, we update Lk as

    (Lk)_ij ← (Lk)_ij + Nk/(N1 · · · Nm) Tr[(L1⁻¹ ⊗ · · · ⊗ L_{k−1}⁻¹ ⊗ E_ij ⊗ L_{k+1}⁻¹ ⊗ · · · ⊗ Lm⁻¹)(LΔL)].
From the above updates it is not transparent whether the Kronecker product saves us any computation. In particular, it is not clear whether the updates can be implemented to run faster than O(N 3 ).
We show below in the next section how to implement these updates efficiently.
3.1.2
Algorithm and complexity analysis
From Theorem 3.2, we obtain Algorithm 1 (which is different from the Picard iteration in [25],
because it operates alternatingly on each subkernel). It is important to note that a further speedup
to Algorithm 1 can be obtained by performing stochastic updates, i.e., instead of computing the
full gradient of the log-likelihood, we perform our updates using only one (or a small minibatch)
subset Yi at each step instead of iterating over the entire training set; this uses the stochastic gradient

    Δ̃ = Ui L_{Yi}⁻¹ Ui^⊤ − (I + L)⁻¹.

The crucial strength of Algorithm 1 lies in the following result:
Algorithm 1 KrK-Picard iteration
Input: Matrices L1, L2, training set T, parameter a.
for i = 1 to maxIter do
    L1 ← L1 + a Tr1((I ⊗ L2⁻¹)(LΔL))/N2        // or update stochastically
    L2 ← L2 + a Tr2((L1⁻¹ ⊗ I)(LΔL))/N1        // or update stochastically
end for
return (L1, L2)
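To make the iteration concrete, here is a naive NumPy transcription of one KrK-Picard step. It is our own sketch: it forms L and LΔL explicitly, so it runs in O(N³) rather than the O(N²) achieved by the factorized implementation of Theorem 3.3, and is meant only to mirror the updates above.

import numpy as np

def gradient(L, samples):
    # Delta of Eq. (4): (1/n) sum_i U_i L_{Y_i}^{-1} U_i^T - (L + I)^{-1}.
    D = -np.linalg.inv(L + np.eye(L.shape[0]))
    for Y in samples:
        D[np.ix_(Y, Y)] += np.linalg.inv(L[np.ix_(Y, Y)]) / len(samples)
    return D

def krk_picard_step(L1, L2, samples, a=1.0):
    N1, N2 = L1.shape[0], L2.shape[0]
    # update L1 with L2 fixed; the einsum evaluates Tr1((I (x) L2^{-1}) M)
    L = np.kron(L1, L2)
    M = (L + L @ gradient(L, samples) @ L).reshape(N1, N2, N1, N2)
    L1 = L1 + a * np.einsum("ab,ibja->ij", np.linalg.inv(L2), M) / N2
    # update L2 with the new L1 (the iteration alternates between blocks);
    # the einsum evaluates Tr2((L1^{-1} (x) I) M)
    L = np.kron(L1, L2)
    M = (L + L @ gradient(L, samples) @ L).reshape(N1, N2, N1, N2)
    L2 = L2 + a * np.einsum("iq,qaib->ab", np.linalg.inv(L1), M) / N1
    return L1, L2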
Theorem 3.3 (Complexity). For N1 ≈ N2 ≈ √N, the updates in Algorithm 1 can be computed in
O(nκ³ + N²) time and O(N²) space, where κ is the size of the largest training subset. Furthermore,
stochastic updates can be computed in O(Nκ² + N^{3/2}) time and O(N + κ²) space.
Indeed, by leveraging the properties of the Kronecker product, the updates can be obtained without
computing LΔL. This result is non-trivial: the components of Δ, (1/n) Σ_i Ui L_{Yi}⁻¹ Ui^⊤ and (I + L)⁻¹,
must be considered separately for computational efficiency. The proof is provided in App. B. However, it seems that considering more than 2 subkernels does not lead to further speed-ups.
This is a marked improvement over [25], which runs in O(N²) space and O(nκ³ + N³) time (non-stochastic) or O(N³) time (stochastic); Algorithm 1 also provides faster stochastic updates than [9].¹
However, one may wonder if by learning the sub-kernels by alternating updates the log-likelihood
converges to a sub-optimal limit. The next section discusses how to jointly update L1 and L2.
3.2
Joint updates
We also analyzed the possibility of updating L1 and L2 jointly: we update L ← L + LΔL and then
recover the Kronecker structure of the kernel by defining the updates L̃1 and L̃2 such that

    (L̃1, L̃2) minimizes ‖L + LΔL − L̃1 ⊗ L̃2‖²_F,  subject to  L̃1 ≻ 0, L̃2 ≻ 0, ‖L̃1‖ = ‖L̃2‖.    (8)

We show in appendix C that such solutions exist and can be computed from the first singular value
and vectors of the matrix R = [vec((L⁻¹ + Δ)_(ij))^⊤]_{i,j=1}^{N1}. Note however that in this case, there is
no guaranteed increase in log-likelihood. The pseudocode for the related algorithm (Joint-Picard)
is given in appendix C.1. An analysis similar to the proof of Thm. 3.3 shows that the updates can be
obtained in O(nκ³ + max(N1, N2)⁴) time.
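The recovery step in (8) is an instance of the nearest-Kronecker-product problem of Van Loan and Pitsianis [22]: rearrange the target matrix, take its leading singular pair, and un-vectorize. A sketch of that step for a generic target M (our own illustration; it ignores the positive definiteness constraint, which the paper handles in appendix C):

import numpy as np

def nearest_kronecker(M, N1, N2):
    # Rearrangement of [22]: row (i, j) of R is vec(M_(ij)), so that
    # ||M - L1 (x) L2||_F = ||R - vec(L1) vec(L2)^T||_F.
    R = M.reshape(N1, N2, N1, N2).transpose(0, 2, 1, 3).reshape(N1 * N1, N2 * N2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    L1 = np.sqrt(s[0]) * U[:, 0].reshape(N1, N1)   # split s[0] evenly so that
    L2 = np.sqrt(s[0]) * Vt[0].reshape(N2, N2)     # ||L1|| = ||L2||, as in (8)
    if np.trace(L1) < 0:   # flipping both factors leaves L1 (x) L2 unchanged
        L1, L2 = -L1, -L2
    return L1, L2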
3.3
Memory-time trade-off
Although KronDPPs have tractable learning algorithms, the memory requirements remain high for
non-stochastic updates, as the matrix Δ̂ = (1/n) Σ_i Ui L_{Yi}⁻¹ Ui^⊤ needs to be stored, requiring O(N²)
memory. However, if the training set can be subdivided such that

    {Y1, . . . , Yn} = ∪_{k=1}^{m} S_k   s.t.  ∀k, |∪_{Y∈S_k} Y| < z,    (9)

Δ̂ can be decomposed as (1/n) Σ_{k=1}^{m} Δ_k with Δ_k = Σ_{Yi∈S_k} Ui L_{Yi}⁻¹ Ui^⊤. Due to the bound in Eq. 9,
each Δ_k will be sparse, with only z² non-zero coefficients. We can then store each Δ_k with minimal
storage and update L1 and L2 in O(nκ³ + mz² + N^{3/2}) time and O(mz² + N) space.
Determining the existence of such a partition of size m is a variant of the NP-Hard Subset-Union
Knapsack Problem (SUKP) [11] with m knapsacks and where the value of each item (i.e., each Yi)
is equal to 1: a solution to SUKP of value n with m knapsacks is equivalent to a solution to Eq. 9.
However, an approximate partition can also be simply constructed via a greedy algorithm.
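A hedged sketch of such a greedy construction (our own heuristic, not the paper's): place each subset into the first group whose element-union stays below z, opening a new group when none fits.

def greedy_partition(samples, z):
    # Greedily build groups S_k whose element-unions stay below z (Eq. 9).
    groups, unions = [], []
    for Y in samples:
        for k, u in enumerate(unions):
            if len(u | set(Y)) < z:
                groups[k].append(Y)
                unions[k] = u | set(Y)
                break
        else:                      # no existing group fits: open a new one
            groups.append([Y])
            unions.append(set(Y))
    return groups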
4
Sampling
Sampling exactly (see Alg. 2 and [17]) from a full DPP kernel costs O(N³ + Nk³) where k is the
size of the sampled subset. The bulk of the computation lies in the initial eigendecomposition of L;
the k orthonormalizations cost O(Nk³). Although the eigendecomposition need only happen once
for many iterations of sampling, exact sampling is nonetheless intractable in practice for large N.

¹ For example, computing matrix B in [9] (defined after Eq. 7), which is a necessary step for (stochastic)
gradient ascent, costs O(N²) due to matrix multiplications.
Algorithm 2 Sampling from a DPP kernel L
Input: Matrix L.
Eigendecompose L as {(λi, vi)}_{1≤i≤N}.
J ← ∅
for i = 1 to N do
    J ← J ∪ {i} with probability λi/(λi + 1).
end for
V ← {vi}_{i∈J}, Y ← ∅
while |V| > 0 do
    Sample i from {1 . . . N} with probability (1/|V|) Σ_{v∈V} v_i²
    Y ← Y ∪ {i}, V ← V⊥, where V⊥ is an orthonormal basis of the subspace of V orthogonal to e_i
end while
return Y
It follows from Cor. 2.2 that for KronDPPs, the eigenvalues λi can be obtained in O(N1³ + N2³),
and the k eigenvectors in O(kN) operations. For N1 ≈ N2 ≈ √N, exact sampling thus only costs
O(N^{3/2} + Nk³). If L = L1 ⊗ L2 ⊗ L3, the same reasoning shows that exact sampling becomes
linear in N, only requiring O(Nk³) operations.
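Combining Cor. 2.2 with Algorithm 2 gives exact sampling without eigendecomposing the full kernel; only the selected eigenvectors are ever materialized. A NumPy sketch for m = 2 (our own illustration):

import numpy as np

def sample_kron_dpp(L1, L2, rng=None):
    rng = rng or np.random.default_rng()
    d1, P1 = np.linalg.eigh(L1)          # spectra of the sub-kernels
    d2, P2 = np.linalg.eigh(L2)
    lam = np.kron(d1, d2)                # eigenvalues of L1 (x) L2
    J = np.nonzero(rng.random(lam.size) < lam / (lam + 1))[0]
    if len(J) == 0:
        return []
    N2 = L2.shape[0]
    # eigenvector i*N2+j of L is P1[:, i] (x) P2[:, j]: build only those in J
    V = np.column_stack([np.kron(P1[:, i // N2], P2[:, i % N2]) for i in J])
    Y = []
    while V.shape[1] > 0:
        probs = (V ** 2).sum(axis=1) / V.shape[1]
        i = rng.choice(len(probs), p=probs)
        Y.append(int(i))
        j = np.argmax(np.abs(V[i]))      # eliminate coordinate i from the span
        V = V - np.outer(V[:, j] / V[i, j], V[i])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)       # re-orthonormalize the basis
    return Y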
One can also resort to MCMC sampling; for instance such a sampler was considered in [13] (though
with an incorrect mixing time analysis). The results of [20] hold only for k-DPPs, but suggest
their MCMC sampler may possibly take O(N² log(N/ε)) time for full DPPs, which is impractical.
Nevertheless, if one develops faster MCMC samplers, they should also be able to profit from the
Kronecker product structure offered by KronDPP.
5
Experimental results
In order to validate our learning algorithm, we compared KrK-Picard to Joint-Picard and to
the Picard iteration (Picard) on multiple real and synthetic datasets.²
5.1
Synthetic tests
To enable a fair comparison between algorithms, we test them on synthetic data drawn from a full
(non-Kronecker) ground-truth DPP kernel. The sub-kernels were initialized by Li = X^⊤X, with
X's coefficients drawn uniformly from [0, 2]; for Picard, L was initialized with L1 ⊗ L2.
For Figures 1a and 1b, training data was generated by sampling 100 subsets from the true kernel
with sizes uniformly distributed between 10 and 190.
[Figure 1: Normalized log-likelihood vs. time for Picard, Joint-Picard, KrK-Picard, and stochastic KrK-Picard with a = 1; (a) N1 = N2 = 50, (b) N1 = N2 = 100, (c) N1 = 100, N2 = 500. The thin dotted lines indicate the standard deviation from the mean.]
² All experiments were repeated 5 times and averaged, using MATLAB on a Linux Mint system with 16GB
of RAM and an i7-4710HQ CPU @ 2.50GHz.
To evaluate KrK-Picard on matrices too large to fit in memory and with large κ, we drew samples
from a 50·10³ × 50·10³ kernel of rank 1,000 (on average |Yi| ≈ 1,000), and learned the kernel
stochastically (only KrK-Picard could be run due to the memory requirements of other methods);
the likelihood drastically improves in only two steps (Fig. 1c).
As shown in Figures 1a and 1b, KrK-Picard converges significantly faster than Picard, especially for large values of N. However, although Joint-Picard also increases the log-likelihood
at each iteration, it converges much more slowly and has a high standard deviation, whereas the standard
deviations for Picard and KrK-Picard are barely noticeable. For these reasons, we drop the
comparison to Joint-Picard in the subsequent experiments.
5.2
Small-scale real data: baby registries
We compared KrK-Picard to Picard and EM [10] on the baby registry dataset (described in-depth in [10]), which has also been used to evaluate other DPP learning algorithms [9, 10, 25]. The
dataset contains 17 categories of baby-related products obtained from Amazon. We learned kernels
for the 6 largest categories (N = 100); in this case, Picard is sufficiently efficient to be preferred
to KrK-Picard; this comparison serves only to evaluate the quality of the final kernel estimates.
The initial marginal kernel K for EM was sampled from a Wishart distribution with N degrees of
freedom and an identity covariance matrix, then scaled by 1/N; for Picard, L was set to K(I −
K)⁻¹; for KrK-Picard, L1 and L2 were chosen (as in Joint-Picard) by minimizing ‖L −
L1 ⊗ L2‖. Convergence was declared when the objective change dipped below a threshold ε. As
one EM iteration takes longer than one Picard iteration but increases the likelihood more, we set
ε_Pic = ε_KrK = 10⁻⁴ and ε_EM = 10⁻⁵.
The final log-likelihoods are shown in Table 1; we set the step-sizes to their largest possible values,
i.e., a_Pic = 1.3 and a_KrK = 1.8. Table 1 shows that KrK-Picard obtains comparable, albeit
slightly worse, log-likelihoods than Picard and EM, which confirms that for tractable N, the better
modeling capability of full kernels makes them preferable to KronDPPs.
Table 1: Final log-likelihoods for each large category of the baby registries dataset

(a) Training set
Category   EM      Picard   KrK-Picard
apparel    -10.1   -10.2    -10.7
bath       -8.6    -8.8     -9.1
bedding    -8.7    -8.8     -9.3
diaper     -10.5   -10.7    -11.1
feeding    -12.1   -12.1    -12.5
gear       -9.3    -9.3     -9.6

(b) Test set
Category   EM      Picard   KrK-Picard
apparel    -10.1   -10.2    -10.7
bath       -8.6    -8.8     -9.1
bedding    -8.7    -8.8     -9.3
diaper     -10.6   -10.7    -11.2
feeding    -12.2   -12.2    -12.6
gear       -9.2    -9.2     -9.5

5.3
Large-scale real dataset: GENES
Finally, to evaluate KrK-Picard on large matrices of real-world data, we train it on data from the
GENES [3] dataset (which has also been used to evaluate DPPs in [3, 19]). This dataset consists of
10,000 genes, each represented by 331 features corresponding to the distance of a gene to hubs in
the BioGRID gene interaction network.
We construct a ground-truth Gaussian DPP kernel on the GENES dataset and use it to obtain 100
training samples with sizes uniformly distributed between 50 and 200 items. Similarly to the synthetic experiments, we initialized KrK-Picard's kernel by setting Li = Xi^⊤Xi where Xi is a
random matrix of size N1 × N1; for Picard, we set the initial kernel L = L1 ⊗ L2.
Figure 2 shows the performance of both algorithms. As with the synthetic experiments, KrK-Picard converges much faster; stochastic updates increase its performance even more, as shown in
Fig. 2b. Average runtimes and speed-ups are given in Table 2: KrK-Picard runs almost an order of
magnitude faster than Picard, and stochastic updates are more than two orders of magnitude faster,
while providing slightly larger initial increases to the log-likelihood.
[Figure 2: Normalized log-likelihood vs. time on the GENES dataset for Picard, KrK-Picard, and stochastic KrK-Picard; n = 150, a = 1. (a) Non-stochastic learning, (b) stochastic vs. non-stochastic.]
Table 2: Average runtime and performance on the GENES dataset for N1 = N2 = 100

                          Picard              KrK-Picard          KrK-Picard (stochastic)
Average runtime           161.5 ± 17.7 s      8.9 ± 0.2 s         1.2 ± 0.02 s
NLL increase (1st iter.)  (2.81 ± 0.03)·10⁴   (2.96 ± 0.02)·10⁴   (3.13 ± 0.04)·10⁴

6
Conclusion and future work
We introduced KronDPPs, a variant of DPPs with kernels structured as the Kronecker product of m
smaller matrices, and showed that typical operations over DPPs such as sampling and learning the
kernel from data can be made efficient for KronDPPs on previously intractable ground set sizes.
By carefully leveraging the properties of the Kronecker product, we derived for m = 2 a low-complexity algorithm to learn the kernel from data which guarantees positive iterates and a monotonic increase of the log-likelihood, and runs in O(nκ³ + N²) time. This algorithm provides even
more significant speed-ups and memory gains in the stochastic case, requiring only O(N^{3/2} + Nκ²)
time and O(N + κ²) space. Experiments on synthetic and real data showed that KronDPPs can be
learned efficiently on sets large enough that L does not fit in memory.
Our experiments showed that KronDPP's reduced number of parameters (compared to full kernels)
did not impact its performance noticeably. However, a more in-depth investigation of its expressivity
may be valuable for future study. Similarly, a deeper study of initialization procedures for DPP
learning algorithms, including KrK-Picard, is an important question.
While discussing learning the kernel, we showed that L1 and L2 cannot be updated simultaneously
in a CCCP-style iteration since g is not convex over (S1, S2). However, it can be shown that g is
geodesically convex over the Riemannian manifold of positive definite matrices, which suggests that
deriving an iteration which would take advantage of the intrinsic geometry of the problem may be a
viable line of future work.
KronDPPs also enable fast sampling, in O(N^{3/2} + Nk³) operations when using two sub-kernels,
and in O(Nk³) when using three sub-kernels. This speedup allows for exact sampling at comparable
or even better cost than previous algorithms for approximate sampling. However, the subset size k
is still limiting, due to the O(Nk³) cost of sampling and learning. A key aspect of future work on
obtaining truly scalable DPPs is to overcome this computational bottleneck.
Acknowledgements
SS acknowledges partial support from NSF grant IIS-1409802.
References
[1] R. Affandi, A. Kulesza, E. Fox, and B. Taskar. Nyström approximation for large-scale Determinantal
Point Processes. In Artificial Intelligence and Statistics (AISTATS), 2013.
[2] R. Affandi, E. Fox, R. Adams, and B. Taskar. Learning the parameters of Determinantal Point Process
kernels. In ICML, 2014.
[3] N. K. Batmanghelich, G. Quon, A. Kulesza, M. Kellis, P. Golland, and L. Bornn. Diversifying sparsity
using variational determinantal point processes. arXiv:1411.6307, 2014.
[4] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[5] A. Borodin. Determinantal point processes. arXiv:0911.1153, 2009.
[6] W. Chao, B. Gong, K. Grauman, and F. Sha. Large-margin determinantal point processes. In Uncertainty
in Artificial Intelligence (UAI), 2015.
[7] L. Decreusefond, I. Flint, N. Privault, and G. L. Torrisi. Determinantal point processes, 2015.
[8] S. Flaxman, A. Wilson, D. Neill, H. Nickisch, and A. Smola. Fast Kronecker inference in Gaussian
processes with non-Gaussian likelihoods. In ICML, pages 607?616, 2015.
[9] M. Gartrell, U. Paquet, and N. Koenigstein. Low-rank factorization of determinantal point processes for
recommendation. arXiv:1602.05436, 2016.
[10] J. Gillenwater, A. Kulesza, E. Fox, and B. Taskar. Expectation-Maximization for learning Determinantal
Point Processes. In NIPS, 2014.
[11] O. Goldschmidt, D. Nehme, and G. Yu. Note: On the set-union knapsack problem. Naval Research
Logistics, 41:833?842, 1994.
[12] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 3:206–229, 2006.
[13] B. Kang. Fast determinantal point process sampling with application to clustering. In Advances in Neural
Information Processing Systems 26, pages 2319?2327. Curran Associates, Inc., 2013.
[14] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: theory,
efficient algorithms and empirical studies. JMLR, 9:235?284, 2008.
[15] A. Kulesza. Learning with Determinantal Point Processes. PhD thesis, University of Pennsylvania, 2013.
[16] A. Kulesza and B. Taskar. k-DPPs: Fixed-size Determinantal Point Processes. In ICML, 2011.
[17] A. Kulesza and B. Taskar. Determinantal Point Processes for machine learning, volume 5. Foundations
and Trends in Machine Learning, 2012.
[18] F. Lavancier, J. Møller, and E. Rubak. Determinantal point process models and statistical inference.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(4):853–877, 2015.
[19] C. Li, S. Jegelka, and S. Sra. Efficient sampling for k-determinantal point processes. arXiv:1509.01618,
2015.
[20] C. Li, S. Jegelka, and S. Sra. Fast DPP sampling for Nyström with application to kernel methods.
arXiv:1603.06052, 2016.
[21] H. Lin and J. Bilmes. Learning mixtures of submodular shells with application to document summarization. In Uncertainty in Artificial Intelligence (UAI), 2012.
[22] C. V. Loan and N. Pitsianis. Approximation with kronecker products. In Linear Algebra for Large Scale
and Real Time Applications, pages 293?314. Kluwer Publications, 1993.
[23] R. Lyons. Determinantal probability measures. Publications Mathématiques de l'Institut des Hautes
Études Scientifiques, 98(1):167–212, 2003.
[24] O. Macchi. The coincidence approach to stochastic point processes. Adv. Appl. Prob., 7(1), 1975.
[25] Z. Mariet and S. Sra. Fixed-point algorithms for learning determinantal point processes. In ICML, 2015.
[26] Z. Mariet and S. Sra. Diversity networks. Int. Conf. on Learning Representations (ICLR), 2016. URL
arXiv:1511.05077.
[27] J. Martens and R. B. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature.
In ICML, 2015.
[28] G. Wu, Z. Zhang, and E. Y. Chang. Kronecker factorization for speeding up kernel machines. In SIAM
Data Mining (SDM), pages 611?615, 2005.
[29] A. L. Yuille and A. Rangarajan. The concave-convex procedure (cccp). In Advances in Neural Information
Processing Systems 14, pages 1033?1040. MIT Press, 2002.
[30] X. Zhang, F. X. Yu, R. Guo, S. Kumar, S. Wang, and S.-F. Chang. Fast orthogonal projection based on
kronecker product. In ICCV, 2015.
[31] T. Zhou, Z. Kuscsik, J.-G. Liu, M. Medo, J. R. Wakeling, and Y.-C. Zhang. Solving the apparent diversityaccuracy dilemma of recommender systems. PNAS, 107(10):4511?4515, 2010.
5,854 | 6,297 | Consistent Estimation of Functions of Data Missing Non-Monotonically and Not at Random
Ilya Shpitser
Department of Computer Science
Johns Hopkins University
[email protected]
Abstract
Missing records are a perennial problem in analysis of complex data of all types,
when the target of inference is some function of the full data law. In simple cases,
where data is missing at random or completely at random [15], well-known adjustments exist that result in consistent estimators of target quantities.
Assumptions underlying these estimators are generally not realistic in practical
missing data problems. Unfortunately, consistent estimators in more complex
cases where data is missing not at random, and where no ordering on variables
induces monotonicity of missingness status are not known in general, with some
notable exceptions [13, 18, 16].
In this paper, we propose a general class of consistent estimators for cases where
data is missing not at random, and missingness status is non-monotonic. Our estimators, which are generalized inverse probability weighting estimators, make
no assumptions on the underlying full data law, but instead place independence
restrictions, and certain other fairly mild assumptions, on the distribution of missingness status conditional on the data.
The assumptions we place on the distribution of missingness status conditional on
the data can be viewed as a version of a conditional Markov random field (MRF)
corresponding to a chain graph. Assumptions embedded in our model permit
identification from the observed data law, and admit a natural fitting procedure
based on the pseudo likelihood approach of [2]. We illustrate our approach with
a simple simulation study, and an analysis of risk of premature birth in women in
Botswana exposed to highly active anti-retroviral therapy.
1 Introduction
Practical data sets generally have missing or corrupted entries. A classical missing data problem is
to find a way to make valid inferences about the full data law. In other words, the goal is to exploit
assumptions on the mechanism which is responsible for missingness or corruption of data records
to transform the problem into another where missingness or corruption were not present at all.
In simple cases, where missingness status is assumed to be missing completely at random (determined by an independent coin flip), or at random (determined by a coin flip independent conditional
on observed data records), adjustments exist which result in consistent estimators of many functions
of the full data law. Unfortunately, these cases are difficult to justify in practice. Often, data records
are missing intermittently and in complex patterns that do not conform to above assumptions. For
instance, in longitudinal observational studies in medicine, patients may elect to not show up at a
particular time point, for reasons having to do with their (by definition missing) health status at that
time point, and then later return for followup.
In this situation, missingness is not determined by a coin flip independent of missing data conditional
on the observed data (data is missing not at random), and missingness status of a patient is not
monotonic under any natural ordering. In this setting, deriving consistent estimators of even simple
functions of the full data law is a challenging problem [13, 18, 16].
In this paper we propose a new class of consistent generalized inverse probability weighting (IPW)
estimators for settings where data is missing non-monotonically and not at random. Like other IPW
estimators, ours makes no modeling assumptions on the full data law, and only models the joint
missingness status of all variables, conditional on those variables. This model can be viewed as
a conditional Markov random field (MRF) with independence assumptions akin to those made in
factors of a chain graph model [6]. The assumptions encoded in our model permit identification of
the full data law, and allow estimation based on the pseudo likelihood approach of [2].
Our paper is organized as follows. We discuss relevant preliminaries on graphical models in Section
2. We fix additional notation and discuss some prior work on missing data in Section 3. We introduce
our missingness model, and identification results based on it in Section 4, and discuss estimation in
Section 5. We illustrate the use of our model with a simple simulation study in Section 6, and give
a data analysis application in Section 7. Finally, we illustrate the difference between our model and
a seemingly similar non-identified model via a parameter counting argument in Section 8, and give
our conclusions in Section 9.
2 Chain Graph Models
We briefly review statistical chain graph models. A simple mixed graph is a graph where every
vertex pair is connected by at most one edge, and there are two types of possible edges: directed and
undirected. Chain graphs are mixed graphs with the property that for every edge cycle in the graph,
there is no way to assign an orientation to undirected edges in any cycle to form a directed cycle [6].
For a graph $G$ with a vertex set $V$, and any subset $A \subseteq V$, define the induced subgraph $G_A$ to be
a graph with the vertex set $A$ and all edges in $G$ between elements in $A$. Given a graph $G$, define
the augmented or moral graph $G^a$ to be an undirected graph obtained from adding a new undirected
edge between any unconnected vertices $W_1, W_2$ if a path of the form $W_1 \rightarrow \circ - \cdots - \circ \leftarrow W_2$
exists in G (note the path may only contain a single intermediate vertex), and then replacing all
directed edges in G by undirected edges.
A clique in an undirected graph is a set of vertices where any pair of vertices are neighbors. A
maximal clique is a clique such that no superset of it forms a clique. Given an undirected graph G,
denote the set of maximal cliques in G by C(G). A block in a simple mixed graph G is any connected
component formed by undirected edges in a graph obtained from G dropping all directed edges.
Given a simple mixed graph G, denote the set of blocks in G by B(G).
A chain graph model is defined by the following factorization
$$p(V) = \prod_{B \in \mathcal{B}(G)} p(B \mid \mathrm{pa}_G(B)), \qquad (1)$$
where for each $B$,
$$p(B \mid \mathrm{pa}_G(B)) = \frac{1}{Z(\mathrm{pa}_G(B))} \prod_{C \in \mathcal{C}\big((G_{B \cup \mathrm{pa}_G(B)})^a\big)} \phi_C(C), \qquad (2)$$
and $\phi_C(C)$ are called potential functions and map value configurations of variables in $C$ to real numbers, which are meant to denote an "affinity" of the model towards that particular value configuration. The chain graph factorization implies Markov properties, described in detail in [6].
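To make the factorization concrete, the following is a minimal Python sketch (not from the paper) of evaluating the block conditional (2) by brute-force normalization over a block of binary variables; the variable names and potential representation are illustrative assumptions.

```python
import itertools
import numpy as np

def block_conditional(b_vals, pa_vals, potentials):
    """p(B = b | pa_G(B)), Eq. (2). `potentials` is a list of
    (clique_vars, phi) pairs; phi maps a tuple of values to a positive number."""
    def score(assignment):
        vals = {**pa_vals, **assignment}
        return np.prod([phi(tuple(vals[v] for v in clique))
                        for clique, phi in potentials])
    b_vars = sorted(b_vals)
    # Partition function Z(pa_G(B)): sum over all configurations of the block B.
    Z = sum(score(dict(zip(b_vars, cfg)))
            for cfg in itertools.product([0, 1], repeat=len(b_vars)))
    return score(b_vals) / Z

# Example: block B = {R1, R2} with parent {L1}, pairwise potentials.
phi12 = lambda v: np.exp(0.5 * v[0] * v[1])   # clique (R1, R2)
phi1L = lambda v: np.exp(1.0 * v[0] * v[1])   # clique (R1, L1)
p = block_conditional({'R1': 1, 'R2': 0}, {'L1': 1},
                      [(('R1', 'R2'), phi12), (('R1', 'L1'), phi1L)])
```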
3 Preliminaries, and Prior Work on Missing Data
We will consider data sets on random variables $L \equiv (L_1, \dots, L_k)$, drawn from a full data law $p(L)$. Associated with each random variable $L_i$ is a binary missingness indicator $R_i$, where $L_i$ is observed if and only if $R_i = 1$. Define a vector $(l^j, r^j) \equiv (l_1^j, \dots, l_k^j, r_1^j, \dots, r_k^j)$ to be the $j$th realization of $p(L, R)$. Define $(l^*)^j \equiv \{ l_i^j \mid r_i^j = 1 \} \subseteq l^j$. In missing data settings, for every $j$, we only get to observe the vector of values $((l^*)^j, r^j)$, and we wish to make inferences using the true realizations $(l^j, r^j)$ from the underlying law. Doing this entails building a bridge between the observed and the underlying realizations, and this bridge is provided by assumptions made on $p(L, R)$.
If we can assume that for any i, p(Ri | L) = p(Ri ), in other words, every missing value is determined by an independent biased coin flip, then data is said to be missing completely at random
(MCAR) [15]. In this simple setting, it is known that any estimator for complete data remains consistent if applied to just the complete cases. A more complex assumption, known as missing at
random (MAR) [15], states that for every $i$, $p(R_i \mid L) = p(R_i \mid L^*)$. In other words, every missing
value is determined by a biased coin flip that is independent of missing data values conditional on
the observed data values. In this setting, a variety of adjustments lead to consistent estimators.
The most interesting patterns of missingness, and the most relevant in practice, are those that do
not obey either of the above two assumptions, in which case data is said to be missing not at random (MNAR). Conventional wisdom in MNAR settings is that without strong parametric modeling
assumptions, many functions of the full data law are not identified from the observed data law.
Nevertheless, a series of recent papers [8, 7, 17], which represented missing data mechanisms as
graphical models, and exploited techniques developed in causal inference, have shown that the full
data law may be non-parametrically identified under MNAR.
In this approach, the full data law is assumed to factorize with respect to a directed acyclic graph
(DAG) [11]. Assumptions implied by this factorization are then used to derive functions of p(L)
in terms of p(R, L? ). We illustrate the approach using Fig. 1 (a),(b) and (c). Here nodes in green
are assumed to be completely observed. In Fig. 1 (a), the Markov factorization is p(R2 , L1 , L2 ) =
p(R2 | L1 )p(L2 | L1 )p(L1 ). It is easy to verify using d-separation [11] in this DAG that p(R2 |
L1 , L2 ) = p(R2 | L1 ). Since L1 is always observed, this setting is MAR, and we get the following
$p(L_1, L_2) = p(L_2 \mid L_1)p(L_1) = p(L_2 \mid L_1, R_2 = 1)p(L_1) = p(R_2 = 1, L^*)/p(R_2 = 1 \mid L_1)$, where
the last expression is a functional of p(R, L? ), and so the full data law p(L) is non-parametrically
identified from the observed data law p(R, L? ).
The ratio form of the identifying functional suggests the following simple IPW estimator for E[L2 ],
known as the Horvitz-Thompson estimator [4]. We estimate p(R2 | L1 ) either directly if L1 is
discrete and low dimensional, or using maximum likelihood fitting of a model for $p(R_2 \mid L_1; \theta)$, for instance a logistic regression model. We then average observed values of $L_2$, but compensate for the fact that observed and missing values of $L_2$ systematically differ, using the inverse of the fitted probability of the case being observed, conditional on $L_1$: $\hat{E}[L_2] = N^{-1} \sum_{n : r_2^n = 1} l_2^n / \hat{p}(R_2 = 1 \mid l_1^n; \hat{\theta})$. Under our missingness model, this estimator is clearly unbiased. Under a number of
additional fairly mild conditions, this estimator is also consistent.
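As an illustration, the following is a minimal numpy sketch of the Horvitz-Thompson computation (not from the paper); the logistic form of the propensity model and its coefficients (a, b) are assumptions made for the example.

```python
import numpy as np

def horvitz_thompson_E_L2(l1, l2, r2, a, b):
    """IPW estimate of E[L2] under the MAR model of Fig. 1 (a); missing
    entries of l2 may be NaN, and (a, b) are assumed logistic coefficients."""
    p_obs = 1.0 / (1.0 + np.exp(-(a + b * l1)))     # fitted p(R2 = 1 | L1)
    weighted = np.where(r2 == 1, np.nan_to_num(l2) / p_obs, 0.0)
    return weighted.sum() / len(l1)                  # sum over complete cases, divided by N
```

The MNAR estimator in the display below is the same computation with the single inverse weight replaced by the product of the two fitted inverse probabilities, restricted to fully complete cases.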
A more complicated graph, shown in Fig. 1 (b), implies the following factorization
p(L1 , L2 , R1 , R2 ) = p(R1 | L2 )p(R2 | L1 )p(L1 | L2 )p(L2 ).
(3)
Using d-separation in this DAG, we see that in cases where any values are missing, neither MCAR
nor MAR assumptions hold under this model. Thus, in this example, data is MNAR. However, the
conditional independence constraints implied by the factorization (3) imply the following
$$p(L_1, L_2) = \frac{p(R_1 = 1, R_2 = 1, L^*)}{p(R_1 = 1 \mid L_2^*, R_2 = 1) \cdot p(R_2 = 1 \mid L_1^*, R_1 = 1)}.$$
As before, all terms on the right hand side are functions of p(R, L? ), and so p(L) is nonparametrically identified from p(R, L? ). This example was discussed in [8].
The form of the identifying functional suggests a simple generalization of the IPW estimator from
the previous example for $E[L_2]$. As before, we fit models $p(R_1 \mid L_2^*; \gamma_1)$ and $p(R_2 \mid L_1^*; \gamma_2)$ by
MLE. We take the empirical average of the observed values of L2 , but reweigh them by the inverses
of both of the estimated probabilities, using complete cases only:
$$\frac{1}{N} \sum_{n : r_1^n = r_2^n = 1} l_2^n \cdot \frac{1}{\hat{p}(r_2 = 1 \mid l_1^n; \hat{\gamma}_2)} \cdot \frac{1}{\hat{p}(r_1 = 1 \mid l_2^n; \hat{\gamma}_1)}.$$
This estimator is also consistent, with the proof a simple generalization of that for Horvitz-Thompson. More generally, it has been shown in [8] that in DAGs where no R variable has a
[Figure 1: six graphical-model diagrams, panels (a)-(f), over variables L1-L4 and missingness indicators R1-R3; see caption.]
Figure 1: (a) A graphical model for MAR data. (b),(c) Graphical model for MNAR data where
identification of the full data law is possible. (d) The no self-censoring model for k = 3. (e)
A missingness model seemingly similar to (d), where the full data law is not identified. (f) An
undirected graph representing an independence model Markov equivalent to the independence model
represented by a chain graph in (d).
child, and the edge $L_i \rightarrow R_i$ does not exist for any $i$, we get:
$$p(L) = \frac{p(L^*, R = 1)}{\prod_{R_i} p\big(R_i \mid \mathrm{pa}_G(R_i),\, R_{\{i \mid L_i \in \mathrm{pa}_G(R_i)\}} = 1\big)}.$$
This identifying functional implies consistent IPW estimators can be derived that are similar to
estimators in the above examples.
The difficulty with this result is that it assumes missingness indicators are disconnected. This assumption means we cannot model persistent dropout or loss to followup (where Ri = 0 at one time
point implies Ri = 0 at all following time points), or complex patterns of non-monotone missing
data (where data is missing intermittently, but missingness also exhibits complex dependence structure). This kind of dependence is represented by connecting R variables in the graph. Unfortunately,
this often leads to non-identification, i.e., the functional of the full data law not being a function of the observed data law. For instance, if we add an edge $R_1 \rightarrow R_2$ to Fig. 1 (b), it is known that $p(L_1, L_2)$
is not identified from p(R, L? ). Intuition for this is presented in Section 8.
A classical approach to missingness with connected R variables assumes sequential ignorability,
and monotone missingness (where there exists an ordering on variables such that every unit that is
missing earlier in the ordering remains missing later in the ordering) [12]. However, this approach
does not easily generalize to data missing in non-monotone patterns and not at random.
Nevertheless, if a sufficient number of edges are missing in the graph, identification sometimes
is possible even if R variables are dependent, and monotonicity and MAR do not hold. In particular, techniques from causal inference have been used to derive complex identification results in this setting [7, 17]. For instance, it has been shown that in Fig. 1 (c),
$$p(L_1, L_2, L_3, L_4) = \frac{p(L^*, R = 1)}{\tilde{p}_1 \cdot \tilde{p}_2},$$
where $\tilde{p}_1 = q_{L_4}(R_1 = 1 \mid L_2, R_2 = 1)$, $\tilde{p}_2 = \frac{q_{L_4}(L_1 \mid R_2 = 1, R_1 = 1)\, q_{L_4}(R_2 = 1)}{\sum_{R_2} q_{L_4}(L_1 \mid R_2, R_1 = 1)\, q_{L_4}(R_2)}$, and $q_{L_4}(R_1, R_2, L_1, L_2, L_3) = p(L_1, L_2, R_1, R_2 \mid L_3, L_4)\, p(L_3)$. See [17] for details. Unfortunately,
it is often difficult to give a practical missing data setting which exhibits the particular pattern of
missing edges that permits identification. In addition, a full characterization of identifiability of
functionals of the full data law under MNAR is an open problem. In the next sections, we generalize
the graphical model approach to missing data from DAGs to a particular type of chain graph. Our
model is able to encode fairly general settings where data is missing non-monotonically and not at
random, while also permitting identification of the full data law under fairly mild assumptions.
4 The No Self-Censoring Missingness Model
Having given the necessary preliminaries, we are ready to define our missingness model for data
missing non-monotonically and not at random. Our desiderata for such a model are as follows.
First, in order for our model to be useful in as wide a variety of missing data settings as possible,
we want to avoid imposing any assumptions on the underlying full data law. Second, since we wish
to consider arbitrary non-monotonic missingness patterns, we want to allow arbitrary relationships
between missingness indicators. Finally, since we wish to allow data to be missing not at random,
we want to allow as much dependence of missingness indicators on the underlying data, even if
missing, as possible.
However, a completely unrestricted relationship between underlying variables and missingness indicators can easily lead to non-identification. For instance, in any graph where the edge $L_i \rightarrow R_i$ exists, the marginal distribution $p(L_i)$ is not in general a function of the observed data law. Thus, we do not allow variables to drive their own missingness status, and thus edges of the form $L_i \rightarrow R_i$.
However, we allow a variable to influence its own missingness status indirectly.
Surprisingly, the restrictions given so far essentially characterize independences defining our proposed model. Consider the following chain graph on vertices L1 , . . . Lk , R1 , . . . Rk . The vertices
L1 , . . . , Lk form a complete DAG, meaning that the full data law p(L1 , . . . , Lk ) has no restrictions.
The vertices $R_1, \dots, R_k$ form a $k$-clique, meaning arbitrary dependence structure between R variables is allowed. In addition, for every $i$, $\mathrm{pa}_G(R_i) \subseteq L \setminus \{L_i\}$, which restricts a variable $L_i$ from
directly causing its own missingness status Ri . The resulting graph is always a chain graph. An
example (for $k = 3$) is shown in Fig. 1 (d). The factorizations (1) and (2) for chain graphs of this
form imply a particular set of independence constraints.
Lemma 1 Let $G$ be a chain graph with vertex set $R \cup L$, where $\mathcal{B}(G) = \{R, \{L_1\}, \dots, \{L_k\}\}$, and for every $i$, $\mathrm{pa}_G(L_i) = \{L_1, \dots, L_{i-1}\}$, $\mathrm{pa}_G(R_i) = L \setminus \{L_i\}$. Then for every $i$, and every $p(L, R)$ that factorizes according to $G$, the only conditional independences implied by this factorization on $p(L, R)$ are $(\forall i)\;(R_i \perp\!\!\!\perp L_i \mid R \setminus \{R_i\}, L \setminus \{L_i\})$.¹
Proof: This follows by the global Markov property results for chain graphs, found in [6].
This set of independences in p(R, L) can be represented not only by a chain graph, but also by
an undirected graph where every pair of vertices except $R_i$ and $L_i$ (for every $i$) is connected. Such a
graph, interpreted as a Markov random field, would imply the same set of conditional independences
as those in Lemma 1. An example of such a graph for k = 3 is shown in Fig. 1 (f). The reason we
emphasize viewing the model using chain graphs is because the only independence restrictions we
place are on the conditional distribution p(R | L); these restrictions resemble those found in factors
of (1), and not in classical conditional Markov random fields, where every variable in R would
depend on every variable in L. We call the missingness model with this independence structure the
no self-censoring model, due to the fact that no variable Li is allowed to directly censor itself via
setting Ri to 0. We now show that under relatively benign assumptions, we can identify the full data
law p(L) in this model.
Lemma 2 If $p(R = 1 \mid L)$ is identified from the observed data distribution $p(L^*, R = 1)$, then $p(L)$ is identified from $p(L^*, R = 1)$ via $p(L^*, R = 1)/p(R = 1 \mid L)$.
Proof: Trivially follows by the chain rule of probability, and the fact that L = L? if R = 1.
To obtain identification, we use a form of the log conditional pseudo-likelihood (LCPL) function, first considered (in joint form) in [2]. Define, for any parameterization $p(R \mid L; \theta)$, where $|R| = k$,
$$\log PL(\theta) = \sum_{i=1}^{k} \; \sum_{j : L^j \setminus \{L_i^j\} \subseteq (L^*)^j} \log p\big(R_i = r_i^j \mid R^j \setminus \{R_i^j\} = r^j, L^j; \theta\big).$$
In subsequent discussion we will assume that if $p_1(R \mid L; \theta_0) \neq p_2(R \mid L; \theta)$ then $\theta_0 \neq \theta$.
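The following is a short sketch (not from the paper) of computing the LCPL objective above; `cond_prob` is a placeholder for the conditional $p(R_i = r_i \mid R \setminus \{R_i\}, L; \theta)$, such as the logistic-form conditional derived in the proof of Lemma 4 below.

```python
import numpy as np

def log_cpl(R, L, cond_prob, theta):
    """Log conditional pseudo-likelihood. R is an (n, k) 0/1 array of
    missingness indicators; L is the corresponding (n, k) data array."""
    n, k = R.shape
    total = 0.0
    for j in range(n):
        for i in range(k):
            # The i-th term for unit j is usable only if L^j \ {L_i^j} is observed.
            if all(R[j, m] == 1 for m in range(k) if m != i):
                total += np.log(cond_prob(i, R[j], L[j], theta))
    return total
```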
Lemma 3 Under the no self-censoring model, in the limit of infinite data sampled from $p(R, L)$, where only $L^*, R$ is observed, $\log PL(\theta)$ is maximized at the true parameter values $\theta_0$.
Proof: The proof follows that for the standard pseudo-likelihood in [9]. The difference between the
LCPL functions evaluated at $\theta_0$ and $\theta$ can be expressed as a sum of conditional relative entropies,
which is always non-negative. The fact that every term in the LCPL function is a function of the
observed data follows by Lemma 1.
We will restrict attention to function classes which satisfy standard assumptions needed to derive
consistent estimators [10], namely compactness of the parameter space, dominance, and (twice)
differentiability with respect to $\theta$, which implies continuity.
¹ $A \perp\!\!\!\perp B \mid C$ is notation found in [3], meaning $A$ is independent of $B$ given $C$.
Corollary 1 Under the no self-censoring model of missingness, and assumptions above, the estimator of $\theta$ maximizing the LCPL function is weakly consistent.
Proof: Follows by Lemma 3, and the argument in [9] via equation (9), Lemma 1 and Theorem 1.
5 Estimation
Since all variables in $R$ are binary, and our model for $p(R \mid L)$ is a type of conditional MRF, a log-linear parameterization is natural. We thus adopt the following class of parameterizations:
$$p(R = r \mid L = l) = \frac{1}{Z(l)} \exp\Big\{ \sum_{R' \in \mathcal{P}(R) \setminus \{\emptyset\}} r_{R'} \cdot f_{R'}(l_{L \setminus L'}; \theta_{R'}) \Big\} \qquad (4)$$
where $L' \equiv \{L_i \mid R_i \in R'\}$, $\mathcal{P}(R)$ is the powerset of $R$, and for every $R'$, $f_{R'}$ is a function parameterized by $\theta_{R'}$, mapping values of $L \setminus L'$ to an $|R'|$-way interaction. Let $\theta \equiv \{\theta_{R'} \mid R' \in \mathcal{P}(R) \setminus \{\emptyset\}\}$. We now show our class of parameterizations gives the right independence structure.
Lemma 4 For an arbitrary p(L), and a conditional distribution p(R | L) parameterized as in (4),
the set of independences in Lemma 1 holds in the joint distribution $p(L, R) = p(R \mid L)p(L)$.
Proof: For any $R_i \in R$, and values $r, l$, such that $r_{R_i} = 1$,
$$p(r_{R_i} \mid r_{R \setminus \{R_i\}}, l_L) = \frac{\exp\big\{ \sum_{R_i \in R' \in \mathcal{P}(R) \setminus \{\emptyset\}} r_{R'} \cdot f_{R'}(l_{L \setminus L'}; \theta_{R'}) \big\}}{1 + \exp\big\{ \sum_{R_i \in R' \in \mathcal{P}(R) \setminus \{\emptyset\}} r_{R'} \cdot f_{R'}(l_{L \setminus L'}; \theta_{R'}) \big\}}.$$
By definition of $f_{R'}$, this functional is not a function of $L_i$, which gives our result.
As expected with a log-linear conditional MRF, the distribution p(Ri | R \ {Ri }, L) resembles the
logistic regression model. Under twice differentiability of $f_{R'}$, first and second derivatives of the
LCPL function have a straightforward derivation, which we omit in the interests of space. Just as
with the logistic model, the estimating equations cannot be solved in closed form, but iterative algorithms are straightforward to construct. For sufficiently simple $f_{R'}$, the Newton-Raphson algorithm
may be employed. Note that every conditional model for Ri is fit only using rows where L \ {Li }
are observed. Thus, the fitting procedure fails in datasets with few enough samples that for some
Ri , no such rows exist. We leave extensions of our model that deal with this issue to future work.
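For concreteness, here is a sketch (not from the paper) of the conditional implied by (4), assuming the pairwise linear interaction functions used in the simulation study of Section 6; with higher-order terms involving $R_i$, additional $r$-dependent contributions would enter the linear predictor. It plugs directly into the `log_cpl` sketch above as `cond_prob`.

```python
import numpy as np

def cond_prob_ri(i, r, l, W):
    """p(R_i = r_i | R \\ {R_i}, L) under (4) with pairwise linear f's;
    W[m, j] plays the role of the hypothetical weight w_mj."""
    eta = sum(W[m, i] * l[m] for m in range(len(l)) if m != i)  # excludes l[i]
    p1 = 1.0 / (1.0 + np.exp(-eta))      # logistic form for R_i = 1
    return p1 if r[i] == 1 else 1.0 - p1
```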
Finally, we use our fitted model $p(R \mid L; \hat{\theta})$ as a joint IPW for estimating functions of $p(L)$. For instance, if $L_1, \dots, L_{k-1}$ represent intermediate outcomes, and $L_k$ the final outcome of a longitudinal study with intermittent MNAR dropout represented by our model, and we are interested in the expected final outcome, $E[L_k]$, we would extend IPW estimators discussed in Section 3 as follows: $\hat{E}[L_k] = N^{-1} \sum_{n : r^n = 1} l_k^n / \hat{p}(R = 1 \mid l^n; \hat{\theta})$. Estimation of more complex functionals of $p(L)$
proceeds similarly, though it may employ marginal structural models if L is high-dimensional. Consistency of these estimators follows, under the usual assumptions, by standard arguments for IPW
estimators, and Corollary 1.
6 A Simple Simulation Study
To verify our results, we implemented our estimator for a simple model in the class of parameterizations (4) that satisfy the assumptions needed for deriving the true parameter by maximizing the LCPL function. Fig. 2 shows our results. For the purposes of illustration, we chose
the model in Fig. 1 (d) with functions $f_{R'}$ defined as follows. For every edge $(L_i, R_j)$ in the graph, define a parameter $w_{ij}$, and a parameter $w_{\emptyset}$. Define every function $f_{R'}$ to be of the form $\sum_{i : L_i \in L \setminus L',\; j : R_j \in R'} w_{ij} L_i(1)$. The values of $L_1, L_2, L_3$ were drawn from a multivariate normal distribution with parameters $\mu = (1, 1, 1)$, $\Sigma = I + 1$. We generated a series of datasets with sample
size 100 to 1000, and compared differences between the true means E[Li (1)] and the unadjusted
(complete case) MLE estimate of E[Li (1)] (blue), and IPW adjusted estimate of E[Li (1)] (red), for
i = 1, 2, 3. The true difference is, of course, 0. Confidence intervals at the 95% level were computed using case resampling bootstrap (50 iterations). The confidence intervals generally overlapped
[Figure 2: three panels (a)-(c) plotting the L1, L2, and L3 mean estimates against sample size (250-1000), comparing estimated, observed, and true values; see caption.]
Figure 2: (a),(b),(c) Results of estimating $E[L_1(1)]$, $E[L_2(1)]$ and $E[L_3(1)]$, respectively, from the model in Fig. 1 (d). The Y axis is parameter value, and the X axis is sample size. Confidence intervals are reported using case resampling bootstrap at the 95% level. Confidence interval size does not necessarily shrink with sample size, a known issue with IPW estimators.
0, while complete case analysis did not. We noted that confidence intervals did not always shrink
with increased sample size, a known difficulty with IPW estimators.
Aside from the usual difficulties with IPW estimators, which are known to suffer from high variance,
our estimator only reweighs observed cases, which may in general be a small fraction of the overall
dataset as k grows (in our simulations only 50-60% of cases were complete). Furthermore, estimating weights by maximizing pseudo-likelihood is known to be less efficient than by maximizing
likelihood, since all variability of variables in the conditioning sets is ignored.
7 Analysis Application
To illustrate the performance of our model in a practical setting where data is missing not at random,
we report an analysis of a survey dataset for HIV-infected women in Botswana, also analyzed in [18].
The goal is to estimate an association between maternal exposure to highly active anti-retroviral
therapy (HAART) during pregnancy and a premature birth outcome among HIV-infected women
in Botswana. The overall data consisted of 33148 obstetrical records from 6 cites in Botswana.
Here we restricted to a subset of HIV positive women (n = 9711). We considered four features: the
outcome (preterm delivery), with 6.7% values missing, and two risk factors ? whether the CD4 count
(a measure of immune system health) was lower than 200 cells per µL (53.1% missing), and whether
HAART was continued from before pregnancy (69.0% missing). We also included hypertension, a
common comorbidity of HIV (6.5% missing). In this dataset missing at random is not a reasonable
assumption, and what is more, missingness patterns are not monotonic.
We used a no self-censoring model with $f_{R'}(\cdot)$ of the same form as in Section 6. The results
are shown in Fig. 3, which contain the complete case analysis (CC), the no self-censoring model
(NSCM), and a version of the discrete choice model in [18] (DCM). We reported the odds ratios
(ORs) with a 95% confidence interval, obtained by bootstrap. Note that CC and DCM confidence
intervals for the OR overlap 1, indicating a weak or non-existent effect. The confidence interval for
the NSCM indicates a somewhat non-intuitive inverse relationship for low CD4 count and premature
birth, which we believe may be due to assumptions of the NSCM not being met with a limited set
of four variables we considered. In fact, the dataset was sufficiently noisy that an expected positive
relationship was not found by any method.
8 Parameter Counting
Parameter counting may be used to give an intuition for why p(L) is identified under the no
self-censoring model, but not under a very similar missingness model where undirected edges
between R variables are replaced by directed edges under some ordering (see Fig. 1 (d) and
                  CC                     NSCM                   DCM
Low CD4 Count     0.782 (0.531, 1.135)   0.651 (0.414, 0.889)   1.020 (0.742, 1.397)
Cont HAART        1.142 (0.810, 1.620)   1.032 (0.670, 1.394)   1.158 (0.869, 1.560)
Figure 3: Analyses of the HIV pregnancy Botswana dataset. CC: complete case analysis, NSCM:
the no self-censoring model with a linear parameterization, DCM: a member of the discrete choice
model family described in [18].
(e) for an example for $k = 3$.) Assume $|L| = k$, where $L$ variables are discrete with $d$ levels. Then the observed data law may be parameterized by $2^k - 1$ parameters for $p(R)$, and by $d^{|R'|} - 1$ parameters for each $p(L' \mid R' = 1, R \setminus R' = 0)$, where $R' \neq \emptyset$, for a total of $2^k - 1 + \sum_{R' \in \mathcal{P}(R) \setminus \{\emptyset\}} (d^{|R'|} - 1) = (d + 1)^k - 1$. The no self-censoring model needs $d^k - 1$ parameters for $p(L)$, and $\sum_{R' \in \mathcal{P}(R) \setminus \{\emptyset\}} d^{k - |R'|}$ for $p(R \mid L)$, yielding a total of $d^k - 1 + (d + 1)^k - d^k = (d + 1)^k - 1$, which means the model is just-identified, and imposes no restrictions on the observed data law under our assumptions on $L$. However, the DAG model needs $d^k - 1$ parameters for $p(L)$, and $\sum_{i=1}^{k} d^{k-1} \cdot 2^{i-1}$ for $p(R \mid L)$, for a total of $d^k - 1 + d^{k-1}(2^k - 1)$.
The following Lemma implies the DAG version of the no self-censoring model is not identified.
Lemma 5 $d^{k-1} \cdot (2^k - 1) > (d + 1)^k - d^k$ for $k \geq 2$, $d \geq 2$.
Proof: For $k = 2$, we have $3d > 2d + 1$, which holds for any $d > 1$. If our result holds for $k$, then $2^k > (d + 1)^k / d^{k-1} - d + 1$. Then the inequality holds for $k + 1$, since $2 > (d + 1)/d$ for $d > 1$.
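The counting argument admits a quick numeric sanity check; the following sketch (not from the paper) recomputes the three parameter counts over subsets and verifies the just-identification identity and Lemma 5.

```python
from itertools import combinations

def param_counts(k, d):
    """Parameter counts from the argument above: observed data law,
    no self-censoring model (NSCM), and the DAG variant."""
    subsets = [s for m in range(1, k + 1) for s in combinations(range(k), m)]
    observed = 2**k - 1 + sum(d**len(s) - 1 for s in subsets)
    nscm = d**k - 1 + sum(d**(k - len(s)) for s in subsets)
    dag = d**k - 1 + d**(k - 1) * (2**k - 1)
    return observed, nscm, dag

for k in (2, 3, 4):
    for d in (2, 3, 5):
        obs, nscm, dag = param_counts(k, d)
        assert obs == nscm == (d + 1)**k - 1   # both equal: just-identified
        assert dag > obs                       # Lemma 5: DAG variant over-parameterized
```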
Just identification under the independence structure given in Lemma 1 was used in [16] (independently of this paper) to derive a parameterization of the model that uses the observed data law. This
paper, by contrast, only models the missingness process represented by p(R | L), and does not
model the observed data law p(L? ) at all.
9
Conclusions
In this paper, we have presented a graphical missingness model based on chain graphs for data
missing non-monotonically and not at random. Specifically, our model places no restrictions on the
underlying full data law, and on the dependence structure of missingness indicators, and allows a
high degree of interdependence between the underlying unobserved variables and missingness indicators. Nevertheless, under our model, and fairly mild assumptions, the full data law is identified.
Our estimator is an inverse probability weighting estimator with the weights being joint probabilities
of the data being observed, conditional on all variables. The weights are fitted by maximizing the
log conditional pseudo likelihood function, first derived in joint form in [2].
We view our work as an alternative to existing and newly developed methods for MNAR data
[13, 18, 16], and an attempt to bridge the gap between the existing rich missing data literature
on identification and estimation strategies for MAR data (see [14] for further references), and newer
work which gave an increasingly sophisticated set of identification conditions for MNAR data using
missingness graphs [8, 7, 17]. The drawbacks of existing MAR methods is that most missingness
patterns of practical interest are not MAR, the drawbacks of the missingness graph literature is that
it has not yet considered estimation, and used assumptions on missingness that, while MNAR, are
difficult to justify in practice (for example Fig. 1 (c) implies a complicated identifying functional
under MNAR, but places a marginal independence restriction (L1 ?? L2 ) on the full data law).
Our work remedies both of these shortcomings. On the one hand, we assume a very general, and
thus easier to justify in practice, missingness model for MNAR data. On the other, we don?t just
consider an identification problem for our model, but give a class of IPW estimators for functions
of the observed data law. Addressing statistical and computational challenges posed by our class of
estimators, and making them practical for analysis of high dimensional MNAR data is our next step.
8
References
[1] Heejung Bang and James M. Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61:962-972, 2005.
[2] Julian Besag. Statistical analysis of lattice data. The Statistician, 24(3):179-195, 1975.
[3] A. Philip Dawid. Conditional independence in statistical theory. Journal of the Royal Statistical Society, 41:1-31, 1979.
[4] D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47:663-685, 1952.
[5] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML-01), pages 282-289. Morgan Kaufmann, 2001.
[6] Steffen L. Lauritzen. Graphical Models. Oxford, U.K.: Clarendon, 1996.
[7] Karthika Mohan and Judea Pearl. Graphical models for recovering probabilistic and causal queries from missing data. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1520-1528. Curran Associates, Inc., 2014.
[8] Karthika Mohan, Judea Pearl, and Jin Tian. Graphical models for inference with missing data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1277-1285. Curran Associates, Inc., 2013.
[9] A. Mozeika, O. Dikmen, and J. Piili. Consistent inference of a general model using the pseudolikelihood method. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 90, 2014.
[10] Whitney Newey and Daniel McFadden. Chapter 35: Large sample estimation and hypothesis testing. In Handbook of Econometrics, Vol. 4, pages 2111-2245. Elsevier Science, 1994.
[11] Judea Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan and Kaufmann, San Mateo, 1988.
[12] James M. Robins. A new approach to causal inference in mortality studies with sustained exposure periods: application to control of the healthy worker survivor effect. Mathematical Modeling, 7:1393-1512, 1986.
[13] James M. Robins. Non-response models for the analysis of non-monotone non-ignorable missing data. Statistics in Medicine, 16:21-37, 1997.
[14] James M. Robins and Mark van der Laan. Unified Methods for Censored Longitudinal Data and Causality. Springer-Verlag New York, Inc., 2003.
[15] D. B. Rubin. Causal inference and missing data (with discussion). Biometrika, 63:581-592, 1976.
[16] Mauricio Sadinle and Jerome P. Reiter. Itemwise conditionally independent nonresponse modeling for incomplete multivariate data. https://arxiv.org/abs/1609.00656, 2016. Working paper.
[17] Ilya Shpitser, Karthika Mohan, and Judea Pearl. Missing data as a causal and probabilistic problem. In Proceedings of the Thirty First Conference on Uncertainty in Artificial Intelligence (UAI-15), pages 802-811. AUAI Press, 2015.
[18] Eric J. Tchetgen Tchetgen, Linbo Wang, and BaoLuo Sun. Discrete choice models for nonmonotone nonignorable missing data: Identification and inference. https://arxiv.org/abs/1607.02631, 2016. Working paper.
5,855 | 6,298 | Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes
Jack W Rae* (jwrae@google.com)
Jonathan J Hunt* (jjhunt@google.com)
Greg Wayne (gregwayne@google.com)
Tim Harley (tharley@google.com)
Alex Graves (gravesa@google.com)
Ivo Danihelka (danihelka@google.com)
Andrew Senior (andrewsenior@google.com)
Timothy P Lillicrap (countzero@google.com)
Google DeepMind
Abstract
Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications
such as language modeling and machine translation. However, they scale poorly in
both space and time as the amount of memory grows, limiting their applicability
to real-world domains. Here, we present an end-to-end differentiable memory
access scheme, which we call Sparse Access Memory (SAM), that retains the
representational power of the original approaches whilst training efficiently with
very large memories. We show that SAM achieves asymptotic lower bounds in
space and time complexity, and find that an implementation runs 1,000× faster
and with 3,000× less physical memory than non-sparse models. SAM learns with
comparable data efficiency to existing models on a range of synthetic tasks and
one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s
of time steps and memories. As well, we show how our approach can be adapted
for models that maintain temporal associations between memories, as with the
recently introduced Differentiable Neural Computer.
1 Introduction
Recurrent neural networks, such as the Long Short-Term Memory (LSTM) [11], have proven to be
powerful sequence learning models [6, 18]. However, one limitation of the LSTM architecture is
that the number of parameters grows proportionally to the square of the size of the memory, making
them unsuitable for problems requiring large amounts of long-term memory. Recent approaches,
such as Neural Turing Machines (NTMs) [7] and Memory Networks [21], have addressed this issue
by decoupling the memory capacity from the number of model parameters. We refer to this class
of models as memory augmented neural networks (MANNs). External memory allows MANNs to
learn algorithmic solutions to problems that have eluded the capabilities of traditional LSTMs, and to
generalize to longer sequence lengths. Nonetheless, MANNs have had limited success in real world
application.
A significant difficulty in training these models results from their smooth read and write operations,
which incur linear computational overhead on the number of memories stored per time step of
training. Even worse, they require duplication of the entire memory at each time step to perform
backpropagation through time (BPTT). To deal with sufficiently complex problems, such as processing
* These authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
a book, or Wikipedia, this overhead becomes prohibitive. For example, to store 64 memories,
a straightforward implementation of the NTM trained over a sequence of length 100 consumes
≈ 30 MiB physical memory; to store 64,000 memories the overhead exceeds 29 GiB (see Figure 1).
In this paper, we present a MANN named SAM (sparse access memory). By thresholding memory
modifications to a sparse subset, and using efficient data structures for content-based read operations,
our model is optimal in space and time with respect to memory size, while retaining end-to-end
gradient based optimization. To test whether the model is able to learn with this sparse approximation,
we examined its performance on a selection of synthetic and natural tasks: algorithmic tasks from
the NTM work [7], Babi reasoning tasks used with Memory Networks [17] and Omniglot one-shot
classification [16, 12]. We also tested several of these tasks scaled to longer sequences via curriculum
learning. For large external memories we observed improvements in empirical run-time and memory
overhead by up to three orders of magnitude over vanilla NTMs, while maintaining near-identical data
efficiency and performance.
Further, in Supplementary D we demonstrate the generality of our approach by describing how to
construct a sparse version of the recently published Differentiable Neural Computer [8]. This Sparse
Differentiable Neural Computer (SDNC) is over 400× faster than the canonical dense variant for a
memory size of 2,000 slots, and achieves the best reported result in the Babi tasks without supervising
the memory access.
2 Background
2.1 Attention and content-based addressing
An external memory $M \in \mathbb{R}^{N \times M}$ is a collection of $N$ real-valued vectors, or words, of fixed size $M$.
A soft read operation is defined to be a weighted average over memory words,
$$r = \sum_{i=1}^{N} w(i)\, M(i), \qquad (1)$$
where $w \in \mathbb{R}^N$ is a vector of weights with non-negative entries that sum to one. Attending to
memory is formalized as the problem of computing w. A content addressable memory, proposed
in [7, 21, 2, 17], is an external memory with an addressing scheme which selects w based upon the
similarity of memory words to a given query q. Specifically, for the ith read weight w(i) we define,
$$w(i) = \frac{f(d(q, M(i)))}{\sum_{j=1}^{N} f(d(q, M(j)))}, \qquad (2)$$
where d is a similarity measure, typically Euclidean distance or cosine similarity, and f is a differentiable monotonic transformation, typically a softmax. We can think of this as an instance of kernel
smoothing where the network learns to query relevant points q. Because the read operation (1) and
content-based addressing scheme (2) are smooth, we can place them within a neural network, and
train the full model using backpropagation.
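The following is a short numpy sketch (not from the paper) of Eqs. (1)-(2); cosine similarity and a softmax with an inverse-temperature parameter beta are one common instantiation, and beta is an assumption of the example.

```python
import numpy as np

def content_read(M, q, beta=1.0):
    """Content-based read, Eqs. (1)-(2): cosine similarity followed by a softmax."""
    sims = (M @ q) / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-8)
    w = np.exp(beta * sims - np.max(beta * sims))  # numerically stable softmax
    w /= w.sum()                                   # read weights w, Eq. (2)
    return w @ M, w                                # read word r, Eq. (1)
```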
2.2 Memory Networks
One recent architecture, Memory Networks, make use of a content addressable memory that is
accessed via a series of read operations [21, 17] and has been successfully applied to a number
of question answering tasks [20, 10]. In these tasks, the memory is pre-loaded using a learned
embedding of the provided context, such as a paragraph of text, and then the controller, given an
embedding of the question, repeatedly queries the memory by content-based reads to determine an
answer.
2.3 Neural Turing Machine
The Neural Turing Machine is a recurrent neural network equipped with a content-addressable
memory, similar to Memory Networks, but with the additional capability to write to memory over
time. The memory is accessed by a controller network, typically an LSTM, and the full model is
differentiable, allowing it to be trained via BPTT.
A write to memory,
$$M_t \leftarrow (1 - R_t) \odot M_{t-1} + A_t, \qquad (3)$$
consists of a copy of the memory from the previous time step $M_{t-1}$ decayed by the erase matrix $R_t$ indicating obsolete or inaccurate content, and an addition of new or updated information $A_t$. The erase matrix $R_t = w_t^W e_t^{\top}$ is constructed as the outer product between a set of write weights $w_t^W \in [0, 1]^N$ and erase vector $e_t \in [0, 1]^M$. The add matrix $A_t = w_t^W a_t^{\top}$ is the outer product between the write weights and a new write word $a_t \in \mathbb{R}^M$, which the controller outputs.
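A minimal numpy sketch of this write (not from the paper) makes the two rank-1 updates explicit:

```python
import numpy as np

def ntm_write(M, w_w, e, a):
    """One NTM-style write, Eq. (3): rank-1 erase followed by rank-1 add."""
    R = np.outer(w_w, e)          # erase matrix R_t = w_t^W e_t^T
    A = np.outer(w_w, a)          # add matrix   A_t = w_t^W a_t^T
    return (1.0 - R) * M + A      # M_t = (1 - R_t) o M_{t-1} + A_t
```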
3 Architecture
This paper introduces Sparse Access Memory (SAM), a new neural memory architecture with two
innovations. Most importantly, all writes to and reads from external memory are constrained to a
sparse subset of the memory words, providing similar functionality as the NTM, while allowing
computational and memory efficient operation. Secondly, we introduce a sparse memory management
scheme that tracks memory usage and finds unused blocks of memory for recording new information.
For a memory containing $N$ words, SAM executes a forward, backward step in $\Theta(\log N)$ time, initializes in $\Theta(N)$ space, and consumes $\Theta(1)$ space per time step. Under some reasonable assumptions,
SAM is asymptotically optimal in time and space complexity (Supplementary A).
3.1 Read
The sparse read operation is defined to be a weighted average over a selection of words in memory:
$$\tilde{r}_t = \sum_{i=1}^{K} \tilde{w}_t^R(s_i)\, M_t(s_i), \qquad (4)$$
where $\tilde{w}_t^R \in \mathbb{R}^N$ contains $K$ non-zero entries with indices $s_1, s_2, \dots, s_K$; $K$ is a small constant, independent of $N$, typically $K = 4$ or $K = 8$. We will refer to sparse analogues of weight vectors $w$ as $\tilde{w}$, and when discussing operations that are used in both the sparse and dense versions of our model use $w$.
We wish to construct $\tilde{w}_t^R$ such that $\tilde{r}_t \approx r_t$. For content-based reads where $w_t^R$ is defined by (2), an effective approach is to keep the $K$ largest non-zero entries and set the remaining entries to zero. We can compute $\tilde{w}_t^R$ naively in $O(N)$ time by calculating $w_t^R$ and keeping the $K$ largest values. However, linear-time operation can be avoided. Since the $K$ largest values in $w_t^R$ correspond to the $K$ closest points to our query $q_t$, we can use an approximate nearest neighbor data-structure, described in Section 3.5, to calculate $\tilde{w}_t^R$ in $O(\log N)$ time.
Sparse read can be considered a special case of the matrix-vector product defined in (1), with two key
distinctions. The first is that we pass gradients for only a constant K number of rows of memory per
time step, versus N , which results in a negligible fraction of non-zero error gradient per timestep
when the memory is large. The second distinction is in implementation: by using an efficient sparse
matrix format such as Compressed Sparse Rows (CSR), we can compute (4) and its gradients in
constant time and space (see Supplementary A).
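As a sketch (not from the paper), the sparse read can be written as follows; exact nearest-neighbor search is used here for clarity where SAM would use an ANN index, and the choice $f(d) = \exp(-d)$ is an assumption of the example.

```python
import numpy as np

def sparse_read(M, q, K=4):
    """Sparse read, Eq. (4): weighted average over the K nearest words."""
    d = np.linalg.norm(M - q, axis=1)      # Euclidean distances d(q, M(i))
    idx = np.argpartition(d, K)[:K]        # support indices s_1, ..., s_K
    w = np.exp(-d[idx])                    # one plausible choice of f
    w /= w.sum()                           # renormalized sparse read weights
    return w @ M[idx], idx, w              # r-tilde_t and its sparse support
```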
3.2 Write
The write operation in SAM is an instance of (3) where the write weights $\tilde{w}_t^W$ are constrained to contain a constant number of non-zero entries. This is done by a simple scheme where the controller
writes either to previously read locations, in order to update contextually relevant memories, or the
least recently accessed location, in order to overwrite stale or unused memory slots with fresh content.
The introduction of sparsity could be achieved via other write schemes. For example, we could use
a sparse content-based write scheme, where the controller chooses a query vector $q_t^W$ and applies
writes to similar words in memory. This would allow for direct memory updates, but would create
problems when the memory is empty (and shift further complexity to the controller). We decided
upon the previously read / least recently accessed addressing scheme for simplicity and flexibility.
3
The write weights are defined as
$$w_t^W = \alpha_t \big( \delta_t\, w_{t-1}^R + (1 - \delta_t)\, \mathbb{I}_t^U \big), \qquad (5)$$
where the controller outputs the interpolation gate parameter $\delta_t$ and the write gate parameter $\alpha_t$. The write to the previously read locations $w_{t-1}^R$ is purely additive, while the least recently accessed word $\mathbb{I}_t^U$ is set to zero before being written to. When the read operation is sparse ($w_{t-1}^R$ has $K$ non-zero entries), it follows the write operation is also sparse.
We define $\mathbb{I}_t^U$ to be an indicator over words in memory, with a value of 1 when the word minimizes a usage measure $U_t$:
$$\mathbb{I}_t^U(i) = \begin{cases} 1 & \text{if } U_t(i) = \min_{j=1,\dots,N} U_t(j) \\ 0 & \text{otherwise.} \end{cases} \qquad (6)$$
If there are several words that minimize Ut then we choose arbitrarily between them. We tried
two definitions of $U_t$. The first definition is a time-discounted sum of write weights $U_T^{(1)}(i) = \sum_{t=0}^{T} \lambda^{T-t} \big( w_t^W(i) + w_t^R(i) \big)$, where $\lambda$ is the discount factor. This usage definition is incorporated
within Dense Access Memory (DAM), a dense-approximation to SAM that is used for experimental
comparison in Section 4.
The second usage definition, used by SAM, is simply the number of time-steps since a non-negligible memory access: $U_T^{(2)}(i) = T - \max\{ t : w_t^W(i) + w_t^R(i) > \delta \}$. Here, $\delta$ is a tuning parameter that we typically choose to be 0.005. We maintain this usage statistic in constant time using a custom data-structure (described in Supplementary A). Finally, we also use the least recently accessed word to calculate the erase matrix: $R_t = I_t^U \mathbf{1}^T$ is defined to be the expansion of this usage indicator, where $\mathbf{1}$ is a vector of ones. The total cost of the write is constant in time and space for both the forward and backward passes, which improves on the linear space and time of the dense write (see Supplementary A).
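As an illustration of how the least recently accessed word can be found cheaply, here is a lazily pruned min-heap sketch; the paper's custom O(1) structure in Supplementary A is more refined, and the class and method names here are our own:

```python
import heapq

class UsageTracker:
    """Tracks, per word, the last time step with a non-negligible access
    (w_t^W(i) + w_t^R(i) > delta), and answers least-recently-accessed
    queries via a lazily pruned min-heap."""
    def __init__(self, N, delta=0.005):
        self.delta = delta
        self.last_access = [0] * N
        self.heap = [(0, i) for i in range(N)]  # (last_access_time, index)

    def record(self, t, accessed):
        # `accessed` maps a word index to its weight w^W + w^R at step t;
        # only the O(1) touched words need to be reported.
        for i, weight in accessed.items():
            if weight > self.delta:
                self.last_access[i] = t
                heapq.heappush(self.heap, (t, i))

    def least_recently_accessed(self):
        # Discard stale heap entries whose timestamp no longer matches.
        while self.heap[0][0] != self.last_access[self.heap[0][1]]:
            heapq.heappop(self.heap)
        return self.heap[0][1]
```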
3.3 Controller
We use a one-layer LSTM for the controller throughout. At each time step, the LSTM receives a concatenation of the external input, $x_t$, and the word, $r_{t-1}$, read in the previous time step. The LSTM then produces a vector, $p_t = (q_t, a_t, \alpha_t, \gamma_t)$, of read and write parameters for memory access via a linear layer. The word read from memory at the current time step, $r_t$, is then concatenated with the output of the LSTM, and this vector is fed through a linear layer to form the final output, $y_t$. The full control flow is illustrated in Supplementary Figure 6.
3.4 Efficient backpropagation through time
We have already demonstrated how the forward operations in SAM can be efficiently computed
in O(T log N ) time. However, when considering space complexity of MANNs, there remains a
dependence on Mt for the computation of the derivatives at the corresponding time step. A naive
implementation requires the state of the memory to be cached at each time step, incurring a space
overhead of O(N T ), which severely limits memory size and sequence length.
Fortunately, this can be remedied. Since there are only O(1) words that are written at each time step,
we instead track the sparse modifications made to the memory at each timestep, apply them in-place
to compute Mt in O(1) time and O(T ) space. During the backward pass, we can restore the state of
Mt from Mt+1 in O(1) time by reverting the sparse modifications applied at time step t. As such the
memory is actually rolled back to previous states during backpropagation (Supplementary Figure 5).
At the end of the backward pass, the memory ends up rolled back to the start state. If required, such as when using truncated BPTT, the final memory state can be restored by making a copy of $M_T$ prior to calling backwards, in O(N) time, or by re-applying the T sparse updates, in O(T) time.
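The rollback scheme can be captured in a few lines. The following sketch (our own naming) journals the overwritten rows on each sparse write and reverts them during the backward pass:

```python
import numpy as np

class JournaledMemory:
    """Memory updated in place, with per-step journals of the O(1)
    overwritten rows so that M_t can be restored from M_{t+1} in O(1)."""
    def __init__(self, M):
        self.M = M            # N x d array
        self.journal = []     # one entry per time step

    def write(self, rows, new_values):
        # Forward pass: save the old contents of the touched rows, then
        # apply the sparse modification in place.
        self.journal.append([(i, self.M[i].copy()) for i in rows])
        for i, v in zip(rows, new_values):
            self.M[i] = v

    def undo(self):
        # Backward pass: revert the most recent step's sparse writes.
        for i, old in reversed(self.journal.pop()):
            self.M[i] = old
```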
3.5 Approximate nearest neighbors
When querying the memory, we can use an approximate nearest neighbor index (ANN) to search over
the external memory for the K nearest words. Where a linear KNN search inspects every element in
memory (taking O(N ) time), an ANN index maintains a structure over the dataset to allow for fast
inspection of nearby points in O(log N ) time.
In our case, the memory is still a dense tensor that the network directly operates on; however the
ANN is a structured view of its contents. Both the memory and the ANN index are passed through
the network and kept in sync during writes. However there are no gradients with respect to the ANN
as its function is fixed.
We considered two types of ANN indexes: FLANN's randomized k-d tree implementation [15] that
arranges the datapoints in an ensemble of structured (randomized k-d) trees to search for nearby
points via comparison-based search, and one that uses locality sensitive hash (LSH) functions that
map points into buckets with distance-preserving guarantees. We used randomized k-d trees for small
word sizes and LSHs for large word sizes. For both ANN implementations, there is an O(log N ) cost
for insertion, deletion and query. We also rebuild the ANN from scratch every N insertions to ensure
it does not become imbalanced.
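The bookkeeping that keeps the ANN view consistent with the dense memory might look as follows; the `index` object is an assumed interface (add/remove/query/rebuild), not FLANN's actual API:

```python
class SyncedANN:
    """Keeps an approximate nearest neighbor index in sync with sparse
    writes to memory; the index is a fixed (non-differentiable) view."""
    def __init__(self, index, rebuild_every):
        self.index = index
        self.rebuild_every = rebuild_every
        self.n_inserts = 0

    def on_write(self, row, old_vec, new_vec):
        self.index.remove(row, old_vec)   # retire the overwritten word
        self.index.add(row, new_vec)      # register the fresh content
        self.n_inserts += 1
        if self.n_inserts % self.rebuild_every == 0:
            self.index.rebuild()          # periodic rebuild avoids imbalance

    def top_k(self, query, K):
        return self.index.query(query, K)  # O(log N) per query
```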
4 Results
4.1 Speed and memory benchmarks
[Figure 1: two log-log plots against the number of memory slots N: (a) wall-clock time [ms] of a single forward and backward pass, and (b) memory usage, for NTM, DAM, SAM linear, and SAM ANN.]
Figure 1: (a) Wall-clock time of a single forward and backward pass. The k-d tree is a FLANN randomized ensemble with 4 trees and 32 checks. For 1M memories a single forward and backward pass takes 12 s for the NTM and 7 ms for SAM, a speedup of 1600×. (b) Memory used to train over a sequence of 100 time steps, excluding initialization of external memory. The space overhead of SAM is independent of memory size, which we see by the flat line. When the memory contains 64,000 words the NTM consumes 29 GiB whereas SAM consumes only 7.8 MiB, a compression ratio of 3700.
We measured the forward and backward times of the SAM architecture versus the dense DAM variant
and the original NTM (details of setup in Supplementary E). SAM is over 100 times faster than the
NTM when the memory contains one million words and an exact linear-index is used, and 1600 times
faster with the k-d tree (Figure 1a). With an ANN the model runs in sublinear time with respect
to the memory size. SAM's memory usage per time step is independent of the number of memory
words (Figure 1b), which empirically verifies the O(1) space claim from Supplementary A. For 64 K
memory words SAM uses 53 MiB of physical memory to initialize the network and 7.8 MiB to run a
100 step forward and backward pass, compared with the NTM which consumes 29 GiB.
4.2 Learning with sparse memory access
We have established that SAM reaps a huge computational and memory advantage over previous models, but can we really learn with SAM's sparse approximations? We investigated the learning cost of inducing sparsity, and the effect of placing an approximate nearest neighbor index within the network,
by comparing SAM with its dense variant DAM and some established models, the NTM and the
LSTM.
We trained each model on three of the original NTM tasks [7]: 1. Copy: copy a random input sequence of length 1–20. 2. Associative Recall: given 3–6 random (key, value) pairs, and subsequently a cue key, return the associated value. 3. Priority Sort: given 20 random keys and priority values, return
[Figure 2: three panels of training curves, cost vs. number of episodes, for (a) Copy, (b) Associative Recall, and (c) Priority Sort; models: LSTM, NTM, DAM, SAM linear, SAM ANN.]
Figure 2: Training curves for sparse (SAM) and dense (DAM, NTM) models. SAM trains comparably
for the Copy task, and reaches asymptotic error significantly faster for Associative Recall and Priority
Sort. Light colors indicate one standard deviation over 30 random seeds.
the top 16 keys in descending order of priority. We chose these tasks because the NTM is known to
perform well on them.
Figure 2 shows that sparse models are able to learn with comparable efficiency to the dense models
and, surprisingly, learn more effectively for some tasks – notably priority sort and associative recall.
This shows that sparse reads and writes can actually benefit early-stage learning in some cases.
Full hyperparameter details are in Supplementary C.
4.3 Scaling with a curriculum
The computational efficiency of SAM opens up the possibility of training on tasks that require storing
a large amount of information over long sequences. Here we show this is possible in practice, by
scaling tasks to a large scale via an exponentially increasing curriculum.
We parametrized three of the tasks described in Section 4.2: associative recall, copy, and priority
sort, with a progressively increasing difficulty level which characterises the length of the sequence and the number of entries to store in memory. For example, the level specifies the input sequence length for the copy task. We exponentially increased the maximum level h when the network began to learn
the fundamental algorithm. Since the time taken for a forward and backward pass scales as O(T) with the sequence length T, following a standard linearly increasing curriculum could potentially take $O(T^2)$ time, if the same amount of training were required at each step of the curriculum. Specifically, h
was doubled whenever the average training loss dropped below a threshold for a number of episodes.
The level was sampled for each minibatch from the uniform distribution over integers U (0, h).
We compared the dense models, NTM and DAM, with both SAM with an exact nearest neighbor
index (SAM linear) and with locality sensitive hashing (SAM ANN). The dense models contained 64
memory words, while the sparse models had $2 \times 10^6$ words. These sizes were chosen to ensure all
models use approximately the same amount of physical memory when trained over 100 steps.
For all tasks, SAM was able to advance further than the other models, and in the associative recall
task, SAM was able to advance through the curriculum to sequences greater than 4000 (Figure 3).
Note that we did not use truncated backpropagation, so this involved BPTT for over 4000 steps with
a memory size in the millions of words.
To investigate whether SAM was able to learn algorithmic solutions to tasks, we investigated its
ability to generalize to sequences that far exceeded those observed during training. Namely we
trained SAM on the associative recall task up to sequences of length 10, 000, and found it was then
able to generalize to sequences of length 200,000 (Supplementary Figure 8).
4.4 Question answering on the Babi tasks
[20] introduced toy tasks they considered a prerequisite to agents which can reason and understand
natural language. They are synthetically generated language tasks with a vocab of about 150 words
that test various aspects of simple reasoning such as deduction, induction and coreferencing.
[Figure 3: three log-log plots of difficulty level vs. episode number for LSTM, NTM, DAM, SAM linear, and SAM ANN, panels (a), (b), and (c).]
Figure 3: Curriculum training curves for sparse and dense models on (a) Associative recall, (b) Copy,
and (c) Priority sort. Difficulty level indicates the task difficulty (e.g. the length of sequence for
copy). We see SAM train (and backpropagate over) episodes with thousands of steps, and tasks which
require thousands of words to be stored to memory. Each model is averaged across 5 replicas of
identical hyper-parameters (light lines indicate individual runs).
We tested the models (including the Sparse Differentiable Neural Computer described in Supplementary D) on this task. The full results and training details are described in Supplementary G.
The MANNs, except the NTM, are able to learn solutions comparable to the previous best results,
failing at only 2 of the tasks. The SDNC manages to solve all but 1 of the tasks, the best reported
result on Babi that we are aware of.
Notably, the best prior results have been obtained by supervising the memory retrieval (during
training the model is provided annotations which indicate which memories should be used to answer
a query). More directly comparable previous work with end-to-end memory networks, which did not
use supervision [17], fails at 6 of the tasks.
Both the sparse and dense models perform comparably at this task, again indicating the sparse approximations
do not impair learning. We believe the NTM may perform poorly since it lacks a mechanism which
allows it to allocate memory effectively.
4.5 Learning on real world data
Finally, we demonstrate that the model is capable of learning on a non-synthetic dataset. Omniglot
[12] is a dataset of 1623 characters taken from 50 different alphabets, with 20 examples of each
character. This dataset is used to test rapid, or one-shot learning, since there are few examples of
each character but many different character classes. Following [16], we generate episodes where a
subset of characters are randomly selected from the dataset, rotated and stretched, and assigned a
randomly chosen label. At each time step an example of one of the characters is presented, along
with the correct label of the preceding character. Each character is presented 10 times in an episode
(but each presentation may be any one of the 20 examples of the character). In order to succeed at the
task the model must learn to rapidly associate a novel character with the correct label, such that it can
correctly classify subsequent examples of the same character class.
Again, we used an exponential curriculum, doubling the number of additional characters provided to
the model whenever the cost was reduced under a threshold. After training all MANNs for the same
length of time, a validation task with 500 characters was used to select the best run, and this was then
tested on a test set, containing all novel characters for different sequence lengths (Figure 4). All of
the MANNs were able to perform much better than chance, even on sequences ≈ 4× longer than seen during training. SAM outperformed other models, presumably due to its much larger memory capacity. Previous results on the Omniglot curriculum task [16] are not identical, since we used 1-hot labels throughout and the training curriculum scaled to longer sequences, but our results with the dense models are comparable (≈ 0.4 errors with 100 characters), while SAM is significantly better (< 0.2 errors with 100 characters).
Figure 4: Test errors for the Omniglot task (described in the text) for the best runs (as chosen by the
validation set). The characters used in the test set were not used in validation or training. All of the
MANNs were able to perform much better than chance with ≈ 500 characters (sequence lengths of ≈ 5000), even though they were trained, at most, on sequences of ≈ 130 (chance is 0.002 for 500 characters). This indicates they are learning generalizable solutions to the task. SAM is able to
outperform other approaches, presumably because it can utilize a much larger memory.
5 Discussion
Scaling memory systems is a pressing research direction due to potential for compelling applications
with large amounts of memory. We have demonstrated that neural networks with large memories can be trained via a sparse read and write scheme that makes use of efficient data structures within the network, obtaining significant speedups during training. Although we have focused on a specific
MANN (SAM), which is closely related to the NTM, the approach taken here is general and can be
applied to many differentiable memory architectures, such as Memory Networks [21].
It should be noted that there are multiple possible routes toward scalable memory architectures. For
example, prior work aimed at scaling Neural Turing Machines [22] used reinforcement learning to
train a discrete addressing policy. This approach also touches only a sparse set of memories at each
time step, but relies on higher variance estimates of the gradient during optimization. Though we can
only guess at what class of memory models will become staple in machine learning systems of the
future, we argue in Supplementary A that they will be no more efficient than SAM in space and time
complexity if they address memories based on content.
We have experimented with randomized k-d trees and LSH within the network to reduce the forward
pass of training to sublinear time, but there may be room for improvement here. K-d trees were not
designed specifically for fully online scenarios, and can become imbalanced during training. Recent
work in tree ensemble models, such as Mondrian forests [13], show promising results in maintaining
balanced hierarchical set coverage in the online setting. An alternative approach which may be
well-suited is LSH forests [3], which adaptively modifies the number of hashes used. It would be an
interesting empirical investigation to more fully assess different ANN approaches in the challenging
context of training a neural network.
Humans are able to retain a large, task-dependent set of memories obtained in one pass with a
surprising amount of fidelity [4]. Here we have demonstrated architectures that may one day compete
with humans at these kinds of tasks.
Acknowledgements
We thank Vyacheslav Egorov, Edward Grefenstette, Malcolm Reynolds, Fumin Wang and Yori Zwols
for their assistance, and the Google DeepMind family for helpful discussions and encouragement.
References
[1] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An optimal
algorithm for approximate nearest neighbor searching in fixed dimensions. J. ACM, 45(6):891–923, November
1998.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning
to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Mayank Bawa, Tyson Condie, and Prasanna Ganesan. Lsh forest: self-tuning indexes for similarity search.
In Proceedings of the 14th international conference on World Wide Web, pages 651–660. ACM, 2005.
[4] Timothy F Brady, Talia Konkle, George A Alvarez, and Aude Oliva. Visual long-term memory has a massive
storage capacity for object details. Proceedings of the National Academy of Sciences, 105(38):14325–14329,
2008.
[5] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for
machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[6] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural
networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on,
pages 6645–6649. IEEE, 2013.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401,
2014.
[8] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska,
Sergio G?mez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing
using a neural network with dynamic external memory. Nature, 2016.
[9] Gaël Guennebaud, Benoît Jacob, Philip Avery, Abraham Bachrach, Sebastien Barthelemy, et al. Eigen v3,
2010.
[10] Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's
books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
[11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780,
1997.
[12] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through
probabilistic program induction. Science, 350(6266):1332–1338, 2015.
[13] Balaji Lakshminarayanan, Daniel M Roy, and Yee Whye Teh. Mondrian forests: Efficient online random
forests. In Advances in Neural Information Processing Systems, pages 3140–3148, 2014.
[14] Rajeev Motwani, Assaf Naor, and Rina Panigrahy. Lower bounds on locality sensitive hashing. SIAM
Journal on Discrete Mathematics, 21(4):930–935, 2007.
[15] Marius Muja and David G. Lowe. Scalable nearest neighbor algorithms for high dimensional data. Pattern
Analysis and Machine Intelligence, IEEE Transactions on, 36, 2014.
[16] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and T Lillicrap. Meta-learning with
memory-augmented neural networks. In International conference on machine learning, 2016.
[17] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in
Neural Information Processing Systems, pages 2431–2439, 2015.
[18] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In
Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc., 2014.
[19] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of
its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
[20] Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin,
and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv
preprint arXiv:1502.05698, 2015.
[21] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916,
2014.
[22] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint
arXiv:1505.00521, 2015.
Breaking the Bandwidth Barrier:
Geometrical Adaptive Entropy Estimation
Weihao Gao†, Sewoong Oh‡, and Pramod Viswanath†
University of Illinois at Urbana-Champaign
Urbana, IL 61801
{wgao9,swoh,pramodv}@illinois.edu
Abstract
Estimators of information theoretic measures such as entropy and mutual information are a basic workhorse for many downstream applications in modern data
science. State of the art approaches have been either geometric (nearest neighbor
(NN) based) or kernel based (with a globally chosen bandwidth). In this paper, we
combine both these approaches to design new estimators of entropy and mutual
information that outperform state of the art methods. Our estimator uses local
bandwidth choices of k-NN distances with a finite k, independent of the sample
size. Such a local and data dependent choice improves performance in practice, but
the bandwidth is vanishing at a fast rate, leading to a non-vanishing bias. We show
that the asymptotic bias of the proposed estimator is universal; it is independent of
the underlying distribution. Hence, it can be precomputed and subtracted from the
estimate. As a byproduct, we obtain a unified way of obtaining both kernel and
NN estimators. The corresponding theoretical contribution relating the asymptotic
geometry of nearest neighbors to order statistics is of independent mathematical
interest.
1 Introduction
Unsupervised representation learning is one of the major themes of modern data science; a common
theme among the various approaches is to extract maximally "informative" features via information-theoretic metrics (entropy, mutual information and their variations) – the primary reason for the
popularity of information theoretic measures is that they are invariant to one-to-one transformations
and that they obey natural axioms such as data processing. Such an approach is evident in many
applications, as varied as computational biology [11], sociology [20] and information retrieval [17],
with the citations representing a mere smattering of recent works. Within mainstream machine
learning, a systematic effort at unsupervised clustering and hierarchical information extraction is
conducted in recent works of [25, 23]. The basic workhorse in all these methods is the computation
of mutual information (pairwise and multivariate) from i.i.d. samples. Indeed, sample-efficient
estimation of mutual information emerges as the central scientific question of interest in a variety
of applications, and is also of fundamental interest to statistics, machine learning and information
theory communities.
While these estimation questions have been studied in the past three decades (and summarized in [28]),
the renewed importance of estimating information theoretic measures in a sample-efficient manner
is persuasively argued in a recent work [2], where the authors note that existing estimators perform
poorly in several key scenarios of central interest (especially when the high dimensional random
variables are strongly related to each other). The most common estimators (featured in scientific
† Coordinated Science Lab and Department of Electrical and Computer Engineering
‡ Coordinated Science Lab and Department of Industrial and Enterprise Systems Engineering
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
software packages) are nonparametric and involve k nearest neighbor (NN) distances between the
samples. The widely used estimator of mutual information is the one by Kraskov, Stögbauer and Grassberger [10], christened the KSG estimator (nomenclature based on the authors, cf. [2]) –
while this estimator works well in practice (and performs much better than other approaches such as
those based on kernel density estimation procedures), it still suffers in high dimensions. The basic
issue is that the KSG estimator (and the underlying differential entropy estimator based on nearest
neighbor distances by Kozachenko and Leonenko (KL) [9]) does not take advantage of the fact that
the samples could lie in a smaller dimensional subspace (more generally, manifold) despite the high
dimensionality of the data itself. Such lower dimensional structures effectively act as boundaries,
causing the estimator to suffer from what is known as boundary biases.
Ameliorating this deficiency is the central theme of recent works [3, 2, 16], each of which aims
to improve upon the classical KL (differential) entropy estimator of [9]. A local SVD is used
to heuristically improve the density estimate at each sample point in [2], while a local Gaussian
density (with empirical mean and covariance weighted by NN distances) is heuristically used for
the same purpose in [16]. Both these approaches, while inspired and intuitive, come with no
theoretical guarantees (even consistency) and from a practical perspective involve delicate choice
of key hyper parameters. An effort towards a systematic study is initiated in [3] which connects the
aforementioned heuristic efforts of [2, 16] to the local log-likelihood density estimation methods
[6, 15] from theoretical statistics.
The local density estimation method is a strong generalization of the traditional kernel density
estimation methods, but requires a delicate normalization which necessitates the solution of certain
integral equations (cf. Equation (9) of [15]). Indeed, such an elaborate numerical effort is one of the
key impediments for the entropy estimator of [3] to be practically valuable. A second key impediment
is that theoretical guarantees (such as consistency) can only be provided when the bandwidth is chosen
globally (leading to poor sample complexity in practice) and consistency requires the bandwidth h
to be chosen such that $nh^d \to \infty$ and $h \to 0$, where n is the sample size and d is the dimension
of the random variable of interest. More generally, it appears that a systematic application of local
log-likelihood methods to estimate functionals of the unknown density from i.i.d. samples is missing
in the theoretical statistics literature (despite local log-likelihood methods for regression and density
estimation being standard textbook fare [29, 14]). We resolve each of these deficiencies in this paper
by undertaking a comprehensive study of estimating the (differential) entropy and mutual information
from i.i.d. samples using sample dependent bandwidth choices (typically fixed k-NN distances). This
effort allows us to connect disparate threads of ideas from seemingly different arenas: NN methods,
local log-likelihood methods, asymptotic order statistics and sample-dependent heuristic, but inspired,
methods for mutual information estimation suggested in the work of [10].
Main Results: We make the following contributions.
1. Density estimation: Parameterizing the log density by a polynomial of degree p, we derive
simple closed form expressions for the local log-likelihood maximization problem for the
cases of $p \le 2$ for arbitrary dimensions, with Gaussian kernel choices. This derivation, posed
as an exercise in [14, Exercise 5.2], significantly improves the computational efficiency
upon similar endeavors in the recent efforts of [3, 16, 26].
2. Entropy estimation: Using resubstitution of the local density estimate, we derive a simple
closed form estimator of the entropy using a sample dependent bandwidth choice (of k-NN
distance, where k is a fixed small integer independent of the sample size): this estimator
outperforms state of the art entropy estimators in a variety of settings. Since the bandwidth
is data dependent and vanishes too fast (because k is fixed), the estimator has a bias, which
we derive a closed form expression for and show that it is independent of the underlying
distribution and hence can be easily corrected: this is our main theoretical contribution, and
involves new theorems on asymptotic statistics of nearest neighbors generalizing classical
work in probability theory [19], which might be of independent mathematical interest.
3. Generalized view: We show that seemingly very different approaches to entropy estimation
? recent works of [2, 3, 16] and the classical work of fixed k-NN estimator of Kozachenko and
Leonenko [9] ? can all be cast in the local log-likelihood framework as specific kernel and
sample dependent bandwidth choices. This allows for a unified view, which we theoretically
justify by showing that resubstitution entropy estimation for any kernel choice using fixed
k-NN distances as bandwidth involves a bias term that is independent of the underlying
2
distribution (but depends on the specific choice of kernel and parametric density family).
Thus our work is a strict mathematical generalization of the classical work of [9].
4. Mutual Information estimation: The inspired work of [10] constructs a mutual information
estimator that subtly altered (in a sample dependent way) the three KL entropy estimation
terms, leading to superior empirical performance. We show that the underlying idea behind
this change can be incorporated in our framework as well, leading to a novel mutual
information estimator that combines the two ideas and outperforms state of the art estimators
in a variety of settings.
In the rest of this paper we describe these main results, the sections organized in roughly the same
order as the enumerated list.
2 Local likelihood density estimation (LLDE)
Given n i.i.d. samples $X_1, \ldots, X_n$, estimating the unknown density $f_X(\cdot)$ in $\mathbb{R}^d$ is a very basic statistical task. Local likelihood density estimators [15, 6] constitute state of the art and are specified by a weight function $K : \mathbb{R}^d \to \mathbb{R}$ (also called a kernel), a degree $p \in \mathbb{Z}_+$ of the polynomial approximation, and the bandwidth $h \in \mathbb{R}$, and maximize the local log-likelihood:
$$\mathcal{L}_x(f) = \sum_{j=1}^{n} K\Big(\frac{X_j - x}{h}\Big) \log f(X_j) - n \int K\Big(\frac{u - x}{h}\Big) f(u)\, du\,, \qquad (1)$$
where maximization is over an exponential polynomial family, locally approximating f(u) near x:
$$\log_e f_{a,x}(u) = a_0 + \langle a_1, u - x \rangle + \langle u - x,\, a_2 (u - x) \rangle + \cdots + a_p[u - x, u - x, \ldots, u - x]\,, \qquad (2)$$
parameterized by $a = (a_0, \ldots, a_p) \in \mathbb{R}^{1 \times d \times d^2 \times \cdots \times d^p}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product and $a_p[u, \ldots, u]$ the p-th order tensor projection. The local likelihood density estimate (LLDE) is defined as $\hat{f}_n(x) = f_{\hat{a}(x),x}(x) = e^{\hat{a}_0(x)}$, where $\hat{a}(x) \in \arg\max_a \mathcal{L}_x(f_{a,x})$. The maximizer is represented
by a series of nonlinear equations, and does not have a closed form in general. We present below a
few choices of the degrees and the weight functions that admit closed form solutions. Concretely, for
$p = 0$, it is known that LLDE reduces to the standard Kernel Density Estimator (KDE) [15]:
$$\hat{f}_n(x) = \frac{1}{n} \sum_{i=1}^{n} K\Big(\frac{x - X_i}{h}\Big) \bigg/ \int K\Big(\frac{u - x}{h}\Big)\, du\,. \qquad (3)$$
If we choose the step function $K(u) = \mathbb{I}(\|u\| \le 1)$ with a local and data-dependent choice of the bandwidth $h = \rho_{k,x}$, where $\rho_{k,x}$ is the k-NN distance from x, then the above estimator recovers the popular k-NN density estimate as a special case, namely, for $C_d = \pi^{d/2}/\Gamma(d/2 + 1)$,
$$\hat{f}_n(x) = \frac{\frac{1}{n}\sum_{i=1}^{n} \mathbb{I}(\|X_i - x\| \le \rho_{k,x})}{\mathrm{Vol}\{u \in \mathbb{R}^d : \|u - x\| \le \rho_{k,x}\}} = \frac{k}{n\, C_d\, \rho_{k,x}^d}\,. \qquad (4)$$
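For concreteness, the k-NN density estimate (4) is a one-liner up to bookkeeping; this sketch (our own naming) assumes x itself is not among the samples:

```python
import numpy as np
from scipy.special import gamma

def knn_density(x, X, k):
    n, d = X.shape
    # rho_{k,x}: distance from x to its k-th nearest sample.
    rho = np.sort(np.linalg.norm(X - x, axis=1))[k - 1]
    C_d = np.pi ** (d / 2) / gamma(d / 2 + 1)  # volume of the unit d-ball
    return k / (n * C_d * rho ** d)            # equation (4)
```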
For higher degree local likelihood, we provide simple closed form solutions and provide a proof
in Section D. Somewhat surprisingly, this result has eluded prior works [16, 26] and [3] which
specifically attempted the evaluation for p = 2. Part of the subtlety in the result is to critically
use the fact that the parametric family (e.g., the polynomial family in (2)) need not be normalized itself; the local log-likelihood maximization ensures that the resulting density estimate is
correctly normalized so that it integrates to 1.
Proposition 2.1. [14, Exercise 5.2] For a degree $p \in \{1, 2\}$, the maximizer of the local likelihood (1) admits a closed form solution when using the Gaussian kernel $K(u) = e^{-\|u\|^2/2}$. In the case of $p = 1$,
$$\hat{f}_n(x) = \frac{S_0}{n (2\pi)^{d/2} h^d} \exp\Big\{ -\frac{1}{2}\, \frac{\|S_1\|^2}{S_0^2} \Big\}\,, \qquad (5)$$
where $S_0 \in \mathbb{R}$ and $S_1 \in \mathbb{R}^d$ are defined for given $x \in \mathbb{R}^d$ and $h \in \mathbb{R}$ as
$$S_0 \equiv \sum_{j=1}^{n} e^{-\frac{\|X_j - x\|^2}{2h^2}}\,, \qquad S_1 \equiv \sum_{j=1}^{n} \frac{1}{h}\,(X_j - x)\, e^{-\frac{\|X_j - x\|^2}{2h^2}}\,. \qquad (6)$$
In the case of $p = 2$, for $S_0$ and $S_1$ defined as above,
$$\hat{f}_n(x) = \frac{S_0}{n (2\pi)^{d/2} h^d\, |\Sigma|^{1/2}} \exp\Big\{ -\frac{1}{2}\, \frac{S_1^T \Sigma^{-1} S_1}{S_0^2} \Big\}\,, \qquad (7)$$
where $|\Sigma|$ is the determinant, and $S_2 \in \mathbb{R}^{d \times d}$ and $\Sigma \in \mathbb{R}^{d \times d}$ are defined as
$$S_2 \equiv \sum_{j=1}^{n} \frac{1}{h^2}\,(X_j - x)(X_j - x)^T\, e^{-\frac{\|X_j - x\|^2}{2h^2}}\,, \qquad \Sigma \equiv \frac{S_0 S_2 - S_1 S_1^T}{S_0^2}\,, \qquad (8)$$
where it follows from Cauchy-Schwarz that $\Sigma$ is positive semidefinite.
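A direct transcription of (5)–(8) into NumPy follows; this is a sketch with our own naming, and it assumes the weighted scatter matrix $\Sigma$ is non-singular (which requires enough points within a few bandwidths of x). Comparing (5) and (7), the degree-1 estimate is recovered by replacing $\Sigma$ with the identity matrix.

```python
import numpy as np

def llde_degree2(x, X, h):
    n, d = X.shape
    Z = (X - x) / h                              # (X_j - x)/h
    w = np.exp(-0.5 * np.sum(Z ** 2, axis=1))    # Gaussian kernel weights
    S0 = w.sum()                                 # (6)
    S1 = Z.T @ w                                 # (6)
    S2 = (Z * w[:, None]).T @ Z                  # (8)
    Sigma = (S0 * S2 - np.outer(S1, S1)) / S0 ** 2
    quad = S1 @ np.linalg.solve(Sigma, S1) / S0 ** 2
    norm = n * (2 * np.pi) ** (d / 2) * h ** d * np.sqrt(np.linalg.det(Sigma))
    return (S0 / norm) * np.exp(-0.5 * quad)     # (7)
```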
One of the major drawbacks of the KDE and k-NN methods is the increased bias near the boundaries.
LLDE provides a principled approach to automatically correct for the boundary bias, which takes
effect only for $p \ge 2$ [6, 21]. This explains the performance improvement for p = 2 in the figure below (left panel), and the gap increases with the correlation as the boundary effect becomes more prominent. We use the proposed estimators with $p \in \{0, 1, 2\}$ to estimate the mutual information
between two jointly Gaussian random variables with correlation r, from n = 500 samples, using
resubstitution methods explained in the next sections. Each point is averaged over 100 instances.
In the right panel, we generate i.i.d. samples from a 2-dimensional Gaussian with correlation 0.9, and
find a local approximation $\hat{f}(u - x^\star)$ around the point $x^\star$ denoted by the blue star in the center. The standard k-NN approach fits a uniform distribution over a circle enclosing the k = 20 nearest neighbors (red circle). The green lines are the contours of the degree-2 polynomial approximation with bandwidth $h = \rho_{20,x^\star}$. The figure illustrates that the k-NN method suffers from the boundary effect, where it underestimates the probability by overestimating the volume in (4). However, the degree-2 LLDE is able to correctly capture the local structure of the pdf, correcting for boundary biases.
Despite the advantages of the LLDE, it requires the bandwidth to be data independent and vanishingly
small (sublinearly in sample size) for consistency almost everywhere – both of these are impediments
to practical use since there is no obvious systematic way of choosing these hyperparameters. On the
other hand, if we restrict our focus to functionals of the density, then both these issues are resolved:
this is the focus of the next section where we show that the bandwidth can be chosen to be based on
fixed k-NN distances and the resulting universal bias easily corrected.
[Figure 1: left, log-log plot of $E[(\hat{I} - I)^2]$ versus $(1 - r)$, where r is the correlation, for p = 0, 1, 2; right, samples $(X_1, X_2)$ with the local approximations around a reference point.]
Figure 1: The boundary bias becomes less significant and the gap closes as correlation decreases for estimating the mutual information (left). Local approximation around the blue star in the center (right). The degree-2 local likelihood approximation (contours in green) automatically captures the local structure whereas the standard k-NN approach (uniform distribution in red circle) fails (right).
3 k-LNN Entropy Estimator
We consider resubstitution entropy estimators of the form $\hat{H}(X) = -(1/n) \sum_{i=1}^{n} \log \hat{f}_n(X_i)$ and propose to use the local likelihood density estimator in (7) and a choice of bandwidth that is local
(varying for each point x) and adaptive (based on the data). Concretely, we choose, for each sample point $X_i$, the bandwidth $h_{X_i}$ to be the distance to its k-th nearest neighbor, $\rho_{k,i}$. Precisely, we propose the following k-Local Nearest Neighbor (k-LNN) entropy estimator of degree 2:
$$\hat{H}^{(n)}_{kLNN}(X) = -\frac{1}{n} \sum_{i=1}^{n} \bigg\{ \log \frac{S_{0,i}}{n (2\pi)^{d/2} \rho_{k,i}^d\, |\Sigma_i|^{1/2}} - \frac{1}{2}\, \frac{S_{1,i}^T \Sigma_i^{-1} S_{1,i}}{S_{0,i}^2} \bigg\} - B_{k,d}\,, \qquad (9)$$
where subtracting $B_{k,d}$, defined in Theorem 1, removes the asymptotic bias, and $k \in \mathbb{Z}_+$ is the only hyper parameter determining the bandwidth. In practice k is a small integer fixed to be in the range 4–8. We only use the $\lceil \log n \rceil$ nearest subset of samples $T_i = \{ j \in [n] : j \neq i \text{ and } \|X_i - X_j\| \le \rho_{\lceil \log n \rceil, i} \}$ in computing the quantities below:
$$S_{0,i} \equiv \sum_{j \in T_i} e^{-\frac{\|X_j - X_i\|^2}{2\rho_{k,i}^2}}\,, \qquad S_{1,i} \equiv \sum_{j \in T_i} \frac{1}{\rho_{k,i}}\,(X_j - X_i)\, e^{-\frac{\|X_j - X_i\|^2}{2\rho_{k,i}^2}}\,,$$
$$S_{2,i} \equiv \sum_{j \in T_i} \frac{1}{\rho_{k,i}^2}\,(X_j - X_i)(X_j - X_i)^T\, e^{-\frac{\|X_j - X_i\|^2}{2\rho_{k,i}^2}}\,, \qquad \Sigma_i \equiv \frac{S_{0,i} S_{2,i} - S_{1,i} S_{1,i}^T}{S_{0,i}^2}\,. \qquad (10)$$
The truncation is important for computational efficiency, but the analysis works as long as $m = O(n^{1/(2d)-\epsilon})$ for any positive $\epsilon$ that can be arbitrarily small. For a larger m, for example of $\Theta(n)$, those neighbors that are further away have a different asymptotic behavior. We show in Theorem 1 that the asymptotic bias is independent of the underlying distribution and hence can be precomputed and removed, under mild conditions on a twice continuously differentiable pdf f(x) (cf. Lemma 3.1 below).
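Putting (9) and (10) together, a sketch of the full estimator reads as follows; `B_kd` should be the precomputed constant (e.g., from Table 1 below), the function name is ours, and the k-d tree query is a stand-in for whatever neighbor search one prefers:

```python
import numpy as np
from scipy.spatial import cKDTree

def klnn_entropy(X, k=5, B_kd=0.0):
    n, d = X.shape
    m = max(k, int(np.ceil(np.log(n))))       # size of the truncated set T_i
    dist, idx = cKDTree(X).query(X, k=m + 1)  # column 0 is the point itself
    H = 0.0
    for i in range(n):
        rho = dist[i, k]                       # bandwidth rho_{k,i}
        Z = (X[idx[i, 1:]] - X[i]) / rho       # scaled neighbors in T_i
        w = np.exp(-0.5 * np.sum(Z ** 2, axis=1))
        S0, S1 = w.sum(), Z.T @ w              # sums of (10)
        S2 = (Z * w[:, None]).T @ Z
        Sigma = (S0 * S2 - np.outer(S1, S1)) / S0 ** 2
        quad = S1 @ np.linalg.solve(Sigma, S1) / S0 ** 2
        # log of the density estimate (7) at X_i, as it appears in (9)
        log_f = (np.log(S0) - np.log(n) - 0.5 * d * np.log(2 * np.pi)
                 - d * np.log(rho) - 0.5 * np.log(np.linalg.det(Sigma))
                 - 0.5 * quad)
        H -= log_f / n
    return H - B_kd                            # bias correction of Theorem 1
```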
Theorem 1. For $k \ge 3$, if $X_1, X_2, \ldots, X_n \in \mathbb{R}^d$ are i.i.d. samples from a twice continuously differentiable pdf f(x), then
$$\lim_{n \to \infty} \mathbb{E}\big[\hat{H}^{(n)}_{kLNN}(X)\big] = H(X)\,, \qquad (11)$$
where $B_{k,d}$ in (9) is a constant that only depends on k and d. Further, if $\mathbb{E}[(\log f(X))^2] < \infty$ then the variance of the proposed estimator is bounded by $\mathrm{Var}[\hat{H}^{(n)}_{kLNN}(X)] = O((\log n)^2/n)$.
This proves the L1 and L2 consistency of the k-LNN estimator; we relegate the proof to Section F
for ease of reading the main part of the paper. The proof assumes Ansatz 1 (also stated in Section F),
which states that a certain exchange of limit holds. As noted in [18], such an assumption is common
in the literature on consistency of k-NN estimators, where it has been implicitly assumed in existing
analyses of entropy estimators including [9, 5, 12, 27], without explicitly stating that such assumptions
are being made. Our choice of a local adaptive bandwidth $h_{X_i} = \rho_{k,i}$ is crucial in ensuring that the asymptotic bias $B_{k,d}$ does not depend on the underlying distribution f(x). This relies on a
fundamental connection to the theory of asymptotic order statistics made precise in Lemma 3.1,
which also gives the explicit formula for the bias below.
The main idea is that the empirical quantities used in the estimate (10) converge in large n limit to
similar quantities defined over order statistics. We make this intuition precise in the next section.
We define order statistics over i.i.d. standard exponential random variables $E_1, E_2, \ldots, E_m$ and i.i.d. random variables $\xi_1, \xi_2, \ldots, \xi_m$ drawn uniformly (the Haar measure) over the unit sphere in $\mathbb{R}^d$, for a variable $m \in \mathbb{Z}_+$. We define for $\beta \in \{0, 1, 2\}$,
$$\tilde{S}_\beta^{(m)} \equiv \sum_{j=1}^{m} \xi_j^{(\beta)}\, \frac{\big(\sum_{\ell=1}^{j} E_\ell\big)^{\beta/d}}{\big(\sum_{\ell=1}^{k} E_\ell\big)^{\beta/d}} \exp\bigg\{ -\frac{\big(\sum_{\ell=1}^{j} E_\ell\big)^{2/d}}{2\big(\sum_{\ell=1}^{k} E_\ell\big)^{2/d}} \bigg\}\,, \qquad (12)$$
where $\xi_j^{(0)} = 1$, $\xi_j^{(1)} = \xi_j \in \mathbb{R}^d$, and $\xi_j^{(2)} = \xi_j \xi_j^T \in \mathbb{R}^{d \times d}$, and let $\bar{S}_\beta = \lim_{m \to \infty} \tilde{S}_\beta^{(m)}$ and $\tilde{\Sigma} = (1/\bar{S}_0^2)(\bar{S}_0 \bar{S}_2 - \bar{S}_1 \bar{S}_1^T)$. We show that the limiting $\bar{S}_\beta$'s are well-defined (in the proof of Theorem 1) and are directly related to the bias terms in the resubstitution estimator of entropy:
$$B_{k,d} = \mathbb{E}\bigg[ \log\Big(\sum_{\ell=1}^{k} E_\ell\Big) + \frac{d}{2} \log 2\pi - \log C_d - \log \bar{S}_0 + \frac{1}{2}\log\big|\tilde{\Sigma}\big| + \frac{1}{2 \bar{S}_0^2}\, \bar{S}_1^T \tilde{\Sigma}^{-1} \bar{S}_1 \bigg]\,. \qquad (13)$$
In practice, we propose using a fixed small k such as five. For $k \le 3$ the estimator has a very large variance, and numerical evaluation of the corresponding bias also converges slowly. For some typical choices of k, we provide approximate evaluations below, where $0.0183(\pm 6)$ indicates an empirical mean $\mu = 183 \times 10^{-4}$ with confidence interval $6 \times 10^{-4}$. In these numerical evaluations, we truncated the summation at m = 50,000. Although we prove that $B_{k,d}$ converges in m, in practice one can choose m based on the number of samples, and $B_{k,d}$ can be evaluated for that m.
Theoretical contribution: Our key technical innovation is a fundamental connection between nearest neighbor statistics and asymptotic order statistics, stated below as Lemma 3.1: we show that the (normalized) distances $\rho_{\ell,i}$'s jointly converge to the standardized uniform order statistics, and the directions $(X_{j_\ell} - X_i)/\|X_{j_\ell} - X_i\|$'s converge to the independent uniform distribution (Haar measure) over the unit sphere.
k        4             5             6             7             8             9
d = 1    -0.0183(±6)   -0.0233(±6)   -0.0220(±4)   -0.0200(±4)   -0.0181(±4)   -0.0171(±3)
d = 2    -0.1023(±5)   -0.0765(±4)   -0.0628(±4)   -0.0528(±3)   -0.0448(±3)   -0.0401(±3)

Table 1: Numerical evaluation of $B_{k,d}$, via sampling 1,000,000 instances for each pair (k, d).
Conditioned on $X_i = x$, the proposed estimator uses nearest neighbor statistics on $Z_{\ell,i} \equiv X_{j_\ell} - x$, where $X_{j_\ell}$ is the $\ell$-th nearest neighbor from x, such that $Z_{\ell,i} = ((X_{j_\ell} - X_i)/\|X_{j_\ell} - X_i\|)\, \rho_{\ell,i}$. Naturally, all the techniques we develop in this paper generalize to any estimators that depend on the nearest neighbor statistics $\{Z_{\ell,i}\}_{i,\ell \in [n]}$ – and the value of such a general result is demonstrated later (in Section 4) when we evaluate the bias in similarly inspired entropy estimators [2, 3, 16, 9].
Lemma 3.1. Let $E_1, E_2, \ldots, E_m$ be i.i.d. standard exponential random variables and $\xi_1, \xi_2, \ldots, \xi_m$ be i.i.d. random variables drawn uniformly over the unit $(d-1)$-dimensional sphere in d dimensions, independent of the $E_i$'s. Suppose f is twice continuously differentiable and $x \in \mathbb{R}^d$ satisfies that there exists $\delta > 0$ such that $f(a) > 0$, $\|\nabla f(a)\| = O(1)$ and $\|H_f(a)\| = O(1)$ for any $\|a - x\| < \delta$. Then for any $m = O(\log n)$, we have the following convergence conditioned on $X_i = x$:
$$\lim_{n \to \infty} d_{TV}\Big( (c_d\, n f(x))^{1/d}\, ( Z_{1,i}, \ldots, Z_{m,i} )\,,\; \big( \xi_1 E_1^{1/d}, \ldots, \xi_m \big(\textstyle\sum_{\ell=1}^{m} E_\ell\big)^{1/d} \big) \Big) = 0\,. \qquad (14)$$
where $d_{TV}(\cdot, \cdot)$ is the total variation distance and $c_d$ is the volume of the unit Euclidean ball in $\mathbb{R}^d$.
Empirical contribution: Numerical experiments suggest that the proposed estimator outperforms
state-of-the-art entropy estimators, and the gap increases with correlation. The idea of using k-NN
distance as bandwidth for entropy estimation was originally proposed by Kozachenko and Leonenko
in [9], and is a special case of the k-LNN method we propose with degree 0 and a step kernel. We
refer to Section 4 for a formal comparison. Another popular resubstitution entropy estimator is to
use KDE in (3) [7], which is a special case of the k-LNN method with degree 0, and the Gaussian
kernel is used in simulations. As comparison, we also study a new estimator [8] based on von Mises
expansion (as opposed to simple re-substitution) which has an improved convergence rate in the large
sample regime. We relegate simulation results to Section B in the appendix.
4 Universality of the k-LNN approach
In this section, we show that Theorem 1 holds universally for a general family of entropy estimators, specified by the choice of $k \in \mathbb{Z}_+$, degree $p \in \mathbb{Z}_+$, and a kernel $K : \mathbb{R}^d \to \mathbb{R}$, thus allowing a unified view of several seemingly disparate entropy estimators [9, 2, 3, 16]. The template of the entropy estimator is the following: given n i.i.d. samples, we first compute the local density estimate by maximizing the local likelihood (1) with bandwidth $\rho_{k,i}$, and then resubstitute it to estimate entropy: $\hat{H}^{(n)}_{k,p,K}(X) = -(1/n)\sum_{i=1}^{n} \log \hat{f}_n(X_i)$.
Theorem 2. For the family of estimators described above, under the hypotheses of Theorem 1, if the solution to the maximization $\hat{a}(x) = \arg\max_a \mathcal{L}_x(f_{a,x})$ exists for all $x \in \{X_1, \ldots, X_n\}$, then for any choice of $k \ge p + 1$, $p \in \mathbb{Z}_+$, and $K : \mathbb{R}^d \to \mathbb{R}$, the asymptotic bias is independent of the underlying distribution:
$$\lim_{n \to \infty} \mathbb{E}\big[\hat{H}^{(n)}_{k,p,K}(X)\big] = H(X) + \tilde{B}_{k,p,K,d}\,, \qquad (15)$$
for some constant $\tilde{B}_{k,p,K,d}$ that only depends on k, p, K and d.
We provide a proof in Section G. Although in general there is no simple analytical characterization of the asymptotic bias $\tilde{B}_{k,p,K,d}$, it can be readily numerically computed: since $\tilde{B}_{k,p,K,d}$ is independent of the underlying distribution, one can run the estimator over i.i.d. samples from any distribution and numerically approximate the bias for any choice of the parameters. However, when the maximization $\hat{a}(x) = \arg\max_a \mathcal{L}_x(f_{a,x})$ admits a closed form solution, as is the case with the proposed k-LNN, then $\tilde{B}_{k,p,K,d}$ can be characterized explicitly in terms of uniform order statistics.
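This numerical route is easy to script: pick any density with known entropy, run the uncorrected estimator, and average the gap. A sketch (our own function names; `klnn_entropy` refers to the earlier sketch) using the uniform distribution on $[0,1]^d$, whose differential entropy is 0:

```python
import numpy as np

def estimate_bias(estimator, d, k, n=2000, trials=200, seed=0):
    rng = np.random.RandomState(seed)
    gaps = []
    for _ in range(trials):
        X = rng.rand(n, d)                    # uniform on [0,1]^d, H = 0
        gaps.append(estimator(X, k=k, B_kd=0.0) - 0.0)
    # Returns the empirical bias and its standard error.
    return np.mean(gaps), np.std(gaps) / np.sqrt(trials)
```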
This family of estimators is general: for instance, the popular KL estimator is a special case with $p = 0$ and a step kernel $K(u) = \mathbb{I}(\|u\| \le 1)$. [9] showed (in a remarkable result at the time) that the asymptotic bias is independent of the dimension d and can be computed exactly to be $\log n - \psi(n) + \psi(k) - \log k$, where $\psi(k)$ is the digamma function defined as $\psi(x) = \Gamma^{-1}(x)\, d\Gamma(x)/dx$. The dimension-independent nature of this asymptotic bias term (of $O(n^{-1/2})$ for $d = 1$ in [24, Theorem 1] and $O(n^{-1/d})$ for general d in [4]) is special to the choice of $p = 0$ and the step kernel; we explain this in detail in Section G, later in the paper. Analogously, the estimator in [2] can be viewed as a special case with $p = 0$ and an ellipsoidal step kernel.
5 k-LNN Mutual information estimator
Given an entropy estimator $\hat{H}_{KL}$, mutual information can be estimated as $\hat{I}_{3KL} = \hat{H}_{KL}(X) + \hat{H}_{KL}(Y) - \hat{H}_{KL}(X, Y)$. In [10], Kraskov, Stögbauer and Grassberger introduced $\hat{I}_{KSG}(X; Y)$ by coupling the choices of the bandwidths. The joint entropy is estimated in the usual way, but for the marginal entropy, instead of using kNN distances from $\{X_j\}$, the bandwidth $h_{X_i} = \rho_{k,i}(X, Y)$ is chosen, which is the k nearest neighbor distance from $(X_i, Y_i)$ in the joint data $\{(X_j, Y_j)\}$. Consider $\hat{I}_{3LNN}(X; Y) = \hat{H}_{kLNN}(X) + \hat{H}_{kLNN}(Y) - \hat{H}_{kLNN}(X, Y)$. Inspired by [10], we introduce the following novel mutual information estimator, denoted by $\hat{I}_{LNN-KSG}(X; Y)$: for the joint $(X, Y)$ we use the LNN entropy estimator proposed in (9), and for the marginal entropies we use the bandwidth $h_{X_i} = \rho_{k,i}(X, Y)$ coupled to the joint estimator. Empirically, we observe that $\hat{I}_{KSG}$ outperforms $\hat{I}_{3KL}$ everywhere, validating the use of correlated bandwidths. However, the performance of $\hat{I}_{LNN-KSG}$ is similar to $\hat{I}_{3LNN}$ – sometimes better and sometimes worse.
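In code, the two combinations differ only in where the bandwidths come from; the sketch below assumes an `entropy_marginal(Z, rho)` variant of the k-LNN estimator that accepts externally supplied per-sample bandwidths (realizing the coupling of [10]), and all names are ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def mi_3lnn(X, Y, entropy, k=5):
    # I_3LNN: three separate k-LNN entropy estimates.
    XY = np.hstack([X, Y])
    return entropy(X, k=k) + entropy(Y, k=k) - entropy(XY, k=k)

def mi_lnn_ksg(X, Y, entropy_joint, entropy_marginal, k=5):
    # LNN-KSG: the joint term uses (9) with its own k-NN bandwidths, while
    # the marginal terms reuse the k-NN distances from the *joint* space.
    XY = np.hstack([X, Y])
    rho_joint = cKDTree(XY).query(XY, k=k + 1)[0][:, k]
    return (entropy_marginal(X, rho_joint) + entropy_marginal(Y, rho_joint)
            - entropy_joint(XY, k=k))
```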
Empirical Contribution: Numerical experiments show that for most regimes of correlation, both 3LNN and LNN-KSG outperform other state-of-the-art estimators, and the gap increases with the correlation r. In the large sample limit, all estimators find the correct mutual information, but both LNN and LNN-KSG are significantly more robust compared to other approaches. Mutual information estimators based on local likelihood maximization have recently been proposed in [2, 3, 16]. However, they involve heuristic choices of hyper-parameters or solving elaborate optimizations and numerical integrations, which are far from being easy to implement. Simulation results can be found in Section C in the appendix.
6 Breaking the bandwidth barrier
While k-NN distance based bandwidths are routine in practical usage [21], the main finding of this work
is that they also turn out to be the "correct" mathematical choice for the purpose of asymptotically
unbiased estimation of an integral functional such as the entropy: −∫ f(x) log f(x); we briefly
discuss the ramifications below. Traditionally, when the goal is to estimate f(x), it is well known
that the bandwidth should satisfy h → 0 and nh^d → ∞ for KDEs to be consistent. As a rule of
thumb, h = 1.06 σ̂ n^{−1/5} is suggested when d = 1, where σ̂ is the sample standard deviation [29,
Chapter 6.3]. On the other hand, when estimating entropy, as well as other integral functionals, it is
known that resubstitution estimators of the form −(1/n) Σ_{i=1}^n log f̂(X_i) achieve variances scaling
as O(1/n) independent of the bandwidth [13]. This allows for a bandwidth as small as O(n^{−1/d}).
The bottleneck in choosing such a small bandwidth is the bias, scaling as O(h² + (nh^d)^{−1} + E_n) [13],
where the lower order dependence on n, dubbed E_n, is generally not known. The barrier in choosing a
global bandwidth of h = O(n^{−1/d}) is the strictly positive bias, whose value depends on the unknown
distribution and cannot be subtracted off. However, perhaps surprisingly, the proposed local and
adaptive choice of the k-NN distance admits an asymptotic bias that is independent of the unknown
underlying distribution. Manually subtracting off the non-vanishing bias gives an asymptotically
unbiased estimator, with a potentially faster convergence, as numerically compared below. Figure 2
illustrates how k-NN based bandwidth significantly improves upon, say, the rule-of-thumb choice of
O(n^{−1/(d+4)}) explained above and another choice of O(n^{−1/(d+2)}). In the left panel, we use the
setting from Figure 3 (right) but with correlation r = 0.999. On the right, we generate X ∼ N(0, 1)
and U from uniform [0, 0.01], let Y = X + U, and estimate I(X; Y). Following recent advances
in [12, 22], the proposed local estimator has the potential to be extended to, for example, Rényi entropy,
but with a multiplicative bias as opposed to additive.
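For comparison, a resubstitution estimate with a fixed rule-of-thumb bandwidth can be sketched as follows (our own code; scipy's gaussian_kde defaults to Scott's rule, h ∝ n^{−1/(d+4)}, which matches the fixed-bandwidth baselines in Figure 2):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_entropy_resub(x):
    """Resubstitution entropy estimate -(1/n) sum_i log fhat(X_i) with a
    fixed rule-of-thumb (Scott) bandwidth; x is an (n, d) array."""
    kde = gaussian_kde(x.T)
    return -np.mean(np.log(kde(x.T)))

# quick check on a 1-d Gaussian; the true entropy is 0.5 * log(2 * pi * e)
x = np.random.default_rng(0).normal(size=(2000, 1))
print(kde_entropy_resub(x), kl_entropy(x), 0.5 * np.log(2 * np.pi * np.e))
```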
[Figure omitted: two log-scale panels of E[(Î − I)²] versus the number of samples n ∈ {100, 200, 400, 800}, comparing the kNN bandwidth against fixed bandwidths N^{−1/(d+4)} and N^{−1/(d+2)}.]
Figure 2: Local and adaptive bandwidth significantly improves over rule-of-thumb fixed bandwidth.
Acknowledgement
This work is supported by NSF SaTC award CNS-1527754, NSF CISE award CCF-1553452, NSF
CISE award CCF-1617745. We thank the anonymous reviewers for their constructive feedback.
References
[1] G. Biau and L. Devroye. Lectures on the Nearest Neighbor Method. Springer, 2016.
[2] S. Gao, G. Ver Steeg, and A. Galstyan. Efficient estimation of mutual information for strongly dependent variables. International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[3] S. Gao, G. Ver Steeg, and A. Galstyan. Estimating mutual information by local gaussian approximation. 31st Conference on Uncertainty in Artificial Intelligence (UAI), 2015.
[4] W. Gao, S. Oh, and P. Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016.
[5] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Nonparametric Statistics, 17(3):277–297, 2005.
[6] N. Hjort and M. Jones. Locally parametric nonparametric density estimation. The Annals of Statistics, pages 1619–1647, 1996.
[7] H. Joe. Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41(4):683–697, 1989.
[8] K. Kandasamy, A. Krishnamurthy, B. Poczos, and L. Wasserman. Nonparametric von Mises estimators for entropies, divergences and mutual informations. In NIPS, pages 397–405, 2015.
[9] L. F. Kozachenko and N. N. Leonenko. Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9–16, 1987.
[10] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, 2004.
[11] S. Krishnaswamy, M. Spitzer, M. Mingueneau, S. Bendall, O. Litvin, E. Stone, D. Peer, and G. Nolan. Conditional density-based analysis of T cell signaling in single-cell data. Science, 346:1250689, 2014.
[12] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. The Annals of Statistics, 36(5):2153–2182, 2008.
[13] H. Liu, L. Wasserman, and J. D. Lafferty. Exponential concentration for mutual information estimation with application to forests. In NIPS, pages 2537–2545, 2012.
[14] C. Loader. Local regression and likelihood. Springer Science & Business Media, 2006.
[15] C. R. Loader. Local likelihood density estimation. The Annals of Statistics, 24(4):1602–1618, 1996.
[16] D. Lombardi and S. Pant. Nonparametric k-nearest-neighbor entropy estimator. Physical Review E, 93(1):013310, 2016.
[17] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to information retrieval, volume 1. Cambridge University Press, Cambridge, 2008.
[18] D. Pál, B. Póczos, and C. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Advances in Neural Information Processing Systems, pages 1849–1857, 2010.
[19] R.-D. Reiss. Approximate distributions of order statistics: with applications to nonparametric statistics. Springer Science & Business Media, 2012.
[20] D. Reshef, Y. Reshef, H. Finucane, S. Grossman, G. McVean, P. Turnbaugh, E. Lander, M. Mitzenmacher, and P. Sabeti. Detecting novel associations in large data sets. Science, 334(6062):1518–1524, 2011.
[21] S. J. Sheather. Density estimation. Statistical Science, 19(4):588–597, 2004.
[22] S. Singh and B. Póczos. Analysis of k-nearest neighbor distances with application to entropy estimation. arXiv preprint arXiv:1603.08578, 2016.
[23] G. Ver Steeg and A. Galstyan. The information sieve. To appear in ICML, arXiv:1507.02284, 2016.
[24] A. B. Tsybakov and E. C. Van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support. Scandinavian Journal of Statistics, pages 75–83, 1996.
[25] G. Ver Steeg and A. Galstyan. Discovering structure in high-dimensional data through correlation explanation. In Advances in Neural Information Processing Systems, pages 577–585, 2014.
[26] P. Vincent and Y. Bengio. Locally weighted full covariance gaussian density estimation. Technical Report 1240, 2003.
[27] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55(5):2392–2405, 2009.
[28] Q. Wang, S. R. Kulkarni, and S. Verdú. Universal estimation of information measures for analog sources. Foundations and Trends in Communications and Information Theory, 5(3):265–353, 2009.
[29] L. Wasserman. All of nonparametric statistics. Springer Science & Business Media, 2006.
CONNECTIVITY VERSUS ENTROPY
Yaser S. Abu-Mostafa
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
How does the connectivity of a neural network (number of synapses per
neuron) relate to the complexity of the problems it can handle (measured by
the entropy)? Switching theory would suggest no relation at all, since all Boolean
functions can be implemented using a circuit with very low connectivity (e.g.,
using two-input NAND gates). However, for a network that learns a problem
from examples using a local learning rule, we prove that the entropy of the
problem becomes a lower bound for the connectivity of the network.
INTRODUCTION
The most distinguishing feature of neural networks is their ability to spontaneously learn the desired function from 'training' samples, i.e., their ability
to program themselves. Clearly, a given neural network cannot just learn any
function, there must be some restrictions on which networks can learn which
functions. One obvious restriction, which is independent of the learning aspect,
is that the network must be big enough to accommodate the circuit complexity of the function it will eventually simulate. Are there restrictions that arise
merely from the fact that the network is expected to learn the function, rather
than being purposely designed for the function? This paper reports a restriction
of this kind.
The result imposes a lower bound on the connectivity of the network (number of synapses per neuron). This lower bound can only be a consequence of
the learning aspect, since switching theory provides purposely designed circuits
of low connectivity (e.g., using only two-input NAND gates) capable of implementing any Boolean function [1,2] . It also follows that the learning mechanism
must be restricted for this lower bound to hold; a powerful mechanism can be
© American Institute of Physics 1988
designed that will find one of the low-connectivity circuits (perhaps by exhaustive search), and hence the lower bound on connectivity cannot hold in general.
Indeed, we restrict the learning mechanism to be local; when a training sample
is loaded into the network, each neuron has access only to those bits carried by
itself and the neurons it is directly connected to. This is a strong assumption
that excludes sophisticated learning mechanisms used in neural-network models,
but may be more plausible from a biological point of view.
The lower bound on the connectivity of the network is given in terms of
the entropy of the environment that provides the training samples. Entropy is a
quantitative measure of the disorder or randomness in an environment or, equivalently, the amount of information needed to specify the environment. There
are many different ways to define entropy, and many technical variations of this
concept [3]. In the next section, we shall introduce the formal definitions and
results, but we start here with an informal exposition of the ideas involved.
The environment in our model produces patterns represented by N bits
x = x_1 · · · x_N (pixels in the picture of a visual scene if you will). Only h different
patterns can be generated by a given environment, where h < 2^N (the entropy
is essentially log_2 h). No knowledge is assumed about which patterns the environment is likely to generate, only that there are h of them. In the learning
process, a huge number of sample patterns are generated at random from the
environment and input to the network, one bit per neuron. The network uses
this information to set its internal parameters and gradually tune itself to this
particular environment. Because of the network architecture, each neuron knows
only its own bit and (at best) the bits of the neurons it is directly connected to
by a synapse. Hence, the learning rules are local: a neuron does not have the
benefit of the entire global pattern that is being learned.
After the learning process has taken place, each neuron is ready to perform
a function defined by what it has learned. The collective interaction of the
functions of the neurons is what defines the overall function of the network. The
main result of this paper is that (roughly speaking) if the connectivity of the
network is less than the entropy of the environment, the network cannot learn
about the environment. The idea of the proof is to show that if the connectivity
is small, the final function of each neuron is independent of the environment,
and hence to conclude that the overall network has accumulated no information
about the environment it is supposed to learn about.
FORMAL RESULT
A neural network is an undirected graph (the vertices are the neurons and the
edges are the synapses). Label the neurons 1, . . . , N and define K_n ⊂ {1, . . . , N}
to be the set of neurons connected by a synapse to neuron n, together with
neuron n itself. An environment is a subset e ⊂ {0,1}^N (each x ∈ e is a sample
from the environment). During learning, x_1, . . . , x_N (the bits of x) are loaded
into the neurons 1, . . . , N, respectively. Consider an arbitrary neuron n and
relabel everything to make K_n become {1, . . . , K}. Thus the neuron sees the
first K coordinates of each x.
Since our result is asymptotic in N, we will specify K as a function of N:
K = αN where α = α(N) satisfies lim_{N→∞} α(N) = α_0 (0 < α_0 < 1). Since the
result is also statistical, we will consider the ensemble of environments

ℰ = ℰ(N) = { e ⊂ {0,1}^N : |e| = h }

where h = 2^{βN} and β = β(N) satisfies lim_{N→∞} β(N) = β_0 (0 < β_0 < 1). The
probability distribution on ℰ is uniform; any environment e ∈ ℰ is as likely to
occur as any other.
The neuron sees only the first K coordinates of each x generated by the
environment e. For each e, we define the function n : {0,1}^K → {0, 1, 2, . . .} where

n(a_1 · · · a_K) = |{ x ∈ e | x_k = a_k for k = 1, . . . , K }|

and the normalized version ν(a) = n(a)/h. The function ν describes the relative frequency of occurrence for each of the 2^K
binary vectors x_1 · · · x_K as x = x_1 · · · x_N runs through all h vectors in e. In other
words, ν specifies the projection of e as seen by the neuron. Clearly, ν(a) ≥ 0
for all a ∈ {0,1}^K and Σ_{a∈{0,1}^K} ν(a) = 1.
Corresponding to two environments e_1 and e_2, we will have two functions ν_1
and ν_2. If ν_1 is not distinguishable from ν_2, the neuron cannot tell the difference
between e_1 and e_2. The distinguishability between ν_1 and ν_2 can be measured by

d(ν_1, ν_2) = (1/2) Σ_{a∈{0,1}^K} |ν_1(a) − ν_2(a)|.

The range of d(ν_1, ν_2) is 0 ≤ d(ν_1, ν_2) ≤ 1, where '0' corresponds to complete
indistinguishability while '1' corresponds to maximum distinguishability. We
are now in a position to state the main result.
Let e_1 and e_2 be independently selected environments from ℰ according to the
uniform probability distribution. d(ν_1, ν_2) is now a random variable, and we are
interested in the expected value E(d(ν_1, ν_2)). The case where E(d(ν_1, ν_2)) = 0
corresponds to the neuron getting no information about the environment, while
the case where E(d(ν_1, ν_2)) = 1 corresponds to the neuron getting maximum
information. The theorem predicts, in the limit, one of these extremes depending
on how the connectivity (α_0) compares to the entropy (β_0).
Theorem.
1. If α_0 > β_0, then lim_{N→∞} E(d(ν_1, ν_2)) = 1.
2. If α_0 < β_0, then lim_{N→∞} E(d(ν_1, ν_2)) = 0.
The proof is given in the appendix, but the idea is easy to illustrate informally. Suppose h = 2^{K+10} (corresponding to part 2 of the theorem). For most
environments e ∈ ℰ, the first K bits of x ∈ e go through all 2^K possible values
approximately 2^{10} times each as x goes through all h possible values once.
Therefore, the patterns seen by the neuron are drawn from the fixed ensemble of
all binary vectors of length K with essentially uniform probability distribution,
i.e., ν is the same for most environments. This means that, statistically, the
neuron will end up doing the same function regardless of the environment at
hand.

What about the opposite case, where h = 2^{K−10} (corresponding to part 1 of
the theorem)? Now, with only 2^{K−10} patterns available from the environment,
the first K bits of x can assume at most 2^{K−10} values out of the possible 2^K
values a binary vector of length K can assume in principle. Furthermore, which
values can be assumed depends on the particular environment at hand, i.e.,
ν does depend on the environment. Therefore, although the neuron still does
not have the global picture, the information it has says something about the
environment.
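The dichotomy is easy to reproduce numerically. The following small Monte-Carlo sketch (ours, not from the paper) draws pairs of random environments and estimates E(d(ν_1, ν_2)) for K above and below log_2 h:

```python
import numpy as np

def mean_d(N, K, h, trials=50, seed=0):
    """Monte-Carlo estimate of E[d(nu1, nu2)] over random environments."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        nus = []
        for _ in range(2):
            # an environment: h distinct N-bit patterns, projected on the first K bits
            firstK = rng.choice(2 ** N, size=h, replace=False) >> (N - K)
            nus.append(np.bincount(firstK, minlength=2 ** K) / h)
        total += 0.5 * np.abs(nus[0] - nus[1]).sum()
    return total / trials

# h = 2**8 patterns out of 2**16: K = 12 > 8 gives d near 1, K = 4 < 8 gives d near 0
print(mean_d(16, 12, 2 ** 8), mean_d(16, 4, 2 ** 8))
```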
ACKNOWLEDGEMENT
This work was supported by the Air Force Office of Scientific Research under
Grant AFOSR-86-0296.
APPENDIX
In this appendix we prove the main theorem. We start by discussing some
basic properties of the ensemble of environments ℰ. Since the probability
distribution on ℰ is uniform and since |ℰ| = C(2^N, h), we have

Pr(e) = C(2^N, h)^{−1},

which is equivalent to generating e by choosing h elements x ∈ {0,1}^N with
uniform probability (without replacement). It follows that

Pr(x ∈ e) = h / 2^N,

Pr(x_1 ∈ e, x_2 ∈ e) = (h / 2^N) × ((h − 1) / (2^N − 1)),

and so on.
The functions n and ν are defined on K-bit vectors. The statistics of n(a)
(a random variable for fixed a) is independent of a:

Pr(n(a_1) = m) = Pr(n(a_2) = m),

which follows from the symmetry with respect to each bit of a. The same holds
for the statistics of ν(a). The expected value is E(n(a)) = h·2^{−K} (h objects going
into 2^K cells), hence E(ν(a)) = 2^{−K}. We now restate and prove the theorem.
Theorem.
1. If α_0 > β_0, then lim_{N→∞} E(d(ν_1, ν_2)) = 1.
2. If α_0 < β_0, then lim_{N→∞} E(d(ν_1, ν_2)) = 0.
Proof.

We expand E(d(ν_1, ν_2)) as follows:

E(d(ν_1, ν_2)) = (1/2) Σ_{a∈{0,1}^K} E(|ν_1(a) − ν_2(a)|) = (2^K / 2h) E(|n_1 − n_2|),

where n_1 and n_2 denote n_1(0···0) and n_2(0···0), respectively, and the last step
follows from the fact that the statistics of n_1(a) and n_2(a) is independent of a.
Therefore, to prove the theorem, we evaluate E(|n_1 − n_2|) for large N.

1. Assume α_0 > β_0. Let n denote n(0···0), and consider Pr(n = 0). For n to
be zero, all 2^{N−K} strings x of N bits starting with K 0's must not be in the
environment e. Hence

Pr(n = 0) = (1 − h/2^N)(1 − h/(2^N − 1)) · · · (1 − h/(2^N − 2^{N−K} + 1)),

where the first term is the probability that 0···00 ∉ e, the second term is the
probability that 0···01 ∉ e given that 0···00 ∉ e, and so on. Hence

Pr(n = 0) ≥ (1 − h·2^{−N}(1 − 2^{−K})^{−1})^{2^{N−K}}
         ≥ (1 − 2h·2^{−N})^{2^{N−K}}
         ≥ 1 − 2h·2^{−N}·2^{N−K}
         = 1 − 2h·2^{−K}.

Hence, Pr(n_1 = 0) = Pr(n_2 = 0) = Pr(n = 0) ≥ 1 − 2h·2^{−K}. However, E(n_1) =
E(n_2) = h·2^{−K}. Therefore,

E(|n_1 − n_2|) = Σ_i Σ_j Pr(n_1 = i, n_2 = j) |i − j|
             = Σ_i Σ_j Pr(n_1 = i) Pr(n_2 = j) |i − j|
             ≥ Σ_j Pr(n_1 = 0) Pr(n_2 = j) · j + Σ_i Pr(n_1 = i) Pr(n_2 = 0) · i,

which follows by throwing away all the terms where neither i nor j is zero (the
term where both i and j are zero appears twice for convenience, but this term is
zero anyway). Thus

E(|n_1 − n_2|) ≥ Pr(n_1 = 0) E(n_2) + Pr(n_2 = 0) E(n_1) ≥ 2(1 − 2h·2^{−K}) h·2^{−K}.

Substituting this estimate in the expression for E(d(ν_1, ν_2)), we get

E(d(ν_1, ν_2)) = (2^K / 2h) E(|n_1 − n_2|)
             ≥ (2^K / 2h) × 2(1 − 2h·2^{−K}) h·2^{−K}
             = 1 − 2h·2^{−K}
             = 1 − 2 × 2^{(β−α)N}.

Since α_0 > β_0 by assumption, this lower bound goes to 1 as N goes to infinity.
Since 1 is also an upper bound for d(ν_1, ν_2) (and hence an upper bound for the
expected value E(d(ν_1, ν_2))), lim_{N→∞} E(d(ν_1, ν_2)) must be 1.
2. Assume α_0 < β_0. Consider

E(|n_1 − n_2|) = E(|(n_1 − h·2^{−K}) − (n_2 − h·2^{−K})|)
             ≤ E(|n_1 − h·2^{−K}| + |n_2 − h·2^{−K}|)
             = E(|n_1 − h·2^{−K}|) + E(|n_2 − h·2^{−K}|)
             = 2 E(|n − h·2^{−K}|).

To evaluate E(|n − h·2^{−K}|), we estimate the variance of n and use the fact
that E(|n − h·2^{−K}|) ≤ √var(n) (recall that h·2^{−K} = E(n)). Since var(n) =
E(n²) − (E(n))², we need an estimate for E(n²). We write n = Σ_{a∈{0,1}^{N−K}} δ_a,
where

δ_a = 1 if 0···0a ∈ e, and 0 otherwise.

In this notation, E(n²) can be written as

E(n²) = E( Σ_{a∈{0,1}^{N−K}} Σ_{b∈{0,1}^{N−K}} δ_a δ_b ) = Σ_a Σ_b E(δ_a δ_b).

For the 'diagonal' terms (a = b),

E(δ_a δ_a) = Pr(δ_a = 1) = h·2^{−N}.

There are 2^{N−K} such diagonal terms, hence a total contribution of 2^{N−K} × h·2^{−N} = h·2^{−K} to the sum. For the 'off-diagonal' terms (a ≠ b),

E(δ_a δ_b) = Pr(δ_a = 1, δ_b = 1) = Pr(δ_a = 1) Pr(δ_b = 1 | δ_a = 1) = (h / 2^N) × ((h − 1) / (2^N − 1)).

There are 2^{N−K}(2^{N−K} − 1) such off-diagonal terms, hence a total contribution of
2^{N−K}(2^{N−K} − 1) × h(h − 1) / (2^N (2^N − 1)) < (h·2^{−K})² · 2^N / (2^N − 1) to the sum. Putting the contributions
from the diagonal and off-diagonal terms together, we get

E(n²) < h·2^{−K} + (h·2^{−K})² · 2^N / (2^N − 1),

var(n) = E(n²) − (E(n))² < (h·2^{−K} + (h·2^{−K})² · 2^N / (2^N − 1)) − (h·2^{−K})²
       = h·2^{−K} + (h·2^{−K})² / (2^N − 1)
       = h·2^{−K} (1 + h·2^{−K} / (2^N − 1))
       < 2h·2^{−K}.

The last step follows since h·2^{−K} is much smaller than 2^N − 1. Therefore, E(|n − h·2^{−K}|) ≤ √var(n) < (2h·2^{−K})^{1/2}. Substituting this estimate in the expression for
E(d(ν_1, ν_2)), we get

E(d(ν_1, ν_2)) = (2^K / 2h) E(|n_1 − n_2|)
             ≤ (2^K / 2h) × 2 E(|n − h·2^{−K}|)
             < (2^K / 2h) × 2 × (2h·2^{−K})^{1/2}
             = (2 · 2^K / h)^{1/2}
             = √2 × 2^{(α−β)N/2}.

Since α_0 < β_0 by assumption, this upper bound goes to 0 as N goes to infinity.
Since 0 is also a lower bound for d(ν_1, ν_2) (and hence a lower bound for the
expected value E(d(ν_1, ν_2))), lim_{N→∞} E(d(ν_1, ν_2)) must be 0. ∎
REFERENCES
[1] Y. Abu-Mostafa, "Neural networks for computing?," AIP Conference Proceedings #151, Neural Networks for Computing, J. Denker (ed.), pp. 1-6, 1986.
[2] Z. Kohavi, Switching and Finite Automata Theory, McGraw-Hill, 1978.
[3] Y. Abu-Mostafa, "The complexity of information extraction," IEEE Trans.
on Information Theory, vol. IT-32, pp. 513-525, July 1986.
[4] Y. Abu-Mostafa, "Complexity in neural systems," in Analog VLSI and Neural
Systems by C. Mead, Addison-Wesley, 1988.
Attractor Neural Networks with Local
Inhibition: from Statistical Physics to a
Digital Programmable Integrated Circuit
E. Pasero
Dipartimento di Elettronica
Politecnico di Torino
I-10129 Torino, Italy
R. Zecchina
Dipartimento di Fisica Teorica e INFN
Università di Torino
I-10125 Torino, Italy
Abstract
Networks with local inhibition are shown to have enhanced computational performance with respect to the classical Hopfield-like networks. In particular the critical capacity of the network is increased, as well as its capability to store correlated patterns. Chaotic dynamic behaviour (exponentially long transients) of the devices indicates the overloading of the associative memory. An implementation based on a programmable logic device is here presented. A 16-neuron circuit is implemented with a XILINX 4020 device. The peculiarity of this solution is the possibility to change parts of the project (weights, transfer function or the whole architecture) with a simple software download of the configuration into the XILINX chip.
1 INTRODUCTION
Attractor Neural Networks endowed with local inhibitory feedbacks have been shown to have interesting computational performances [1]. Past effort was concentrated in studying a variety of synaptic structures or learning algorithms, while less attention was devoted to studying the possible role played by different dynamical schemes. The definition of relaxation dynamics is the central problem for the study of the associative and computational capabilities in models of attractor neural networks and might be of interest also for hardware implementation in view of the
constraints on the precision of the synaptic weights.
In this paper, we give a brief discussion concerning the computational and physical
role played by local inhibitory interactions which lead to an effective non-monotonic
transfer function for the neurons. In the last few years other models characterized
by non-monotonic neurons have been proposed [2, 3].
For Hebbian learning we show, numerically, that the critical capacity increases with
respect to the Hopfield case and that such result can be interpreted in terms of a
twofold task realized by the dynamical process. By means of local inhibition, the
system dynamically selects a subspace (or subnetwork) of minimal static noise with
respect to the recalled pattern; at the same time, and in the selected subspace, the
retrieval of the memorized pattern is performed. The dynamic behaviour of the
network, for deterministic sequential updating, range from fixed points to chaotic
evolution, with the storage ratio as control parameter, the transition appearing
in correspondence to the collapse of the associative performance. Resorting to two
simplified versions of the model, we study the problem of their optimal performance
by the replica method; in particular the role of non-monotonic functions and of
dynamical subspace selection is discussed.
In a second part of the work, the implementation of the discussed model by means of a XILINX programmable gate array is discussed. The circuit implements a 16-32 neuron network in which the analog characteristics (such as a capacitive decay) are emulated by digital solutions. As expected, the limited resolution of the weights does not represent a limit for the performance of the network.
2 THE MODEL: theory and performance
We study an attractor neural network composed of N three-state ±1, 0 formal neurons. The ±1 values code for the patterns (the patterns are indeed binary) and are thus used during the learning phase, while the 0-state is a don't care state, not belonging to the pattern code, which has only a dynamical role. The system is assumed to be fully connected and its evolution is governed by sequential or parallel updating of equations (1) and (2), where the field dynamics is

h_i(t + 1) = λ h_i(t) + Σ_{j=1}^N J_ij S_j(t),    i = 1, . . . , N,    (2)

γ is a dynamic threshold of the local inhibitory feedback in (1) (typically we take γ(t) proportional to Σ_i |h_i(t − 1)|), the {J_ij} are the synaptic conductances and λ is a capacitive decay factor of the input potential (λ = e^{−δt/τ}, where τ = RC).
The performance of the network is described in terms of two parameters which have both a dynamical and a computational interpretation. In particular we define the retrieval activity as the fraction of neurons which are not in the zero state,

a = (1/N) Σ_i |S_i|,    (3)
while the parameter that defines the retrieval quality is the scaled overlap

m^μ = (1/(aN)) Σ_i ξ_i^μ S_i,    (4)

where the {ξ_i^μ = ±1, i = 1, . . . , N; μ = 1, . . . , P} are the memorized binary patterns. The scaled overlap can be thought of simply as the overlap computed in the subspace M of the active neurons, M = {i : S_i ≠ 0, i = 1, . . . , N}.
Given a set of P random independent binary patterns {ξ^μ}, the Hebb-Hopfield learning rule corresponds to fixing the synaptic matrix J_ij by the additive relation J_ij = (1/N) Σ_{μ=1}^P ξ_i^μ ξ_j^μ (with J_ii = 0). The effect of the dynamical process defined by
(1) and (2) is the selection of subspaces M of active neurons in which the static noise is minimized (such subspaces will be hereafter referred to as orthogonal subspaces). Before entering into the description of the results, it is worthwhile to remember that, in Hopfield-like attractor neural networks, the mean of the cross-correlation fluctuations produces in the local fields of the neurons a static noise, referred to as the cross-talk of the memories. Together with temporal correlations, the static noise is responsible for the phase transition of the neural networks from associative memory to spin-glass. More precisely, when the Hopfield model is in a fixed point ξ^μ which belongs to the set of memories, the local fields are given by h_i ξ_i^μ = 1 + R_i^μ, where

R_i^μ = (1/N) Σ_{μ'≠μ} Σ_{j≠i} ξ_i^μ ξ_i^{μ'} ξ_j^{μ'} ξ_j^μ

is the static noise (gaussian distribution with 0 mean and variance √α).
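A minimal simulation sketch of the model defined by (1)-(4) (our own illustration; the zeroing rule standing in for (1) and the threshold prefactor c are our reading of the damaged source, so treat them as assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                              # storage ratio alpha = P/N = 0.1
xi = rng.choice([-1.0, 1.0], size=(P, N))   # memorized binary patterns
J = (xi.T @ xi) / N                         # Hebb-Hopfield couplings
np.fill_diagonal(J, 0.0)

def run(S, steps=30, lam=0.5, c=1.0):
    h = J @ S
    for _ in range(steps):
        gamma = c * np.mean(np.abs(h))          # dynamic threshold from previous fields
        h = lam * h + J @ S                     # eq. (2), parallel updating
        S = np.sign(h) * (np.abs(h) <= gamma)   # assumed eq. (1): inhibit large |h|
    return S

S = run(xi[0].copy())
M = S != 0
a = np.abs(S).mean()                             # retrieval activity, eq. (3)
m = (xi[0][M] * S[M]).sum() / max(M.sum(), 1)    # scaled overlap, eq. (4)
print(a, m)
```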
The preliminary performance study of the model under discussion has revealed several new basic features, in particular: (i) the critical capacity, for the Hebb learning rule, is increased up to α_c ≈ 0.33 (instead of 0.14 [4]); (ii) the mean cross-correlation fluctuation computed in the selected subspaces is minimized by the dynamical process in the region α < α_c; (iii) in correspondence to the associative transition the system goes through a dynamic transition from fixed points to chaotic trajectories.

The quantitative results concerning associative performance are obtained by means of extended simulations. A typical simulation takes the memorized patterns as initial configurations and lets the system relax until it reaches a stationary point. The quantity describing the performance of the network as an associative memory is the mean scaled overlap m between the final stationary states and the memorized patterns, used as initial states. As the number of memorized configurations grows, one observes a threshold at α = α_c ≈ 0.33 beyond which the stored states become unstable (numerical results were performed for networks of size up to N = 1000). We observe that since the recall of the patterns is performed with no errors (up to
α ≈ 0.31), also the number of stored bits in the synaptic matrix is increased with respect to the Hopfield case.

The typical size of the sub-networks D_M, like the network capacity, depends on the threshold parameter γ and on the kind of updating: for γ(t) proportional to Σ_i |h_i(t − 1)| and parallel updating we find D_M ≈ N/2 (α_c ≈ 0.33).
The static noise reduction corresponds to the minimization of the mean fluctuation C of the cross-correlations (cross-talk) in the subspaces (equation (5)), where ε_i^α = 1 if i ∈ M in pattern α and zero otherwise, as a function of α. Under the dynamical process (1) and (2), C does not follow a statistical law but undergoes a minimization that qualitatively explains the increase in the storage capacity. For α < α_c, once the system has relaxed in a stationary subspace, the model becomes equivalent (in the subspace) to a Hopfield network with a static noise term which is no longer random. The statistical mechanics of the combinatorial task of minimizing the noise-energy term (5) can be studied analytically by the replica method; the results are of general interest in that they give an upper bound to the performance of networks endowed with Hebb-like synaptic matrices and with the possibility of selecting optimal subnetworks for retrieval dynamics of the patterns [8].
As already stated, the behaviour of the neural network as a dynamical system is directly related to its performance as an associative memory. The system shows an abrupt transition in the dynamics, from fixed points to chaotic exponentially long transients, in correspondence to the value of the storage ratio at which the memorized configurations become unstable. The only (external) control parameter of the model as a dynamical system is the storage ratio α = P/N. Dynamic complex behaviour appears as a clear signal of saturation of the attractor neural network and does not depend on the symmetry of the couplings.

As a concluding remark concerning this short description of the network performance, we observe that the dynamic selection of subspaces seems to take advantage of finite size effects, allowing the storage of correlated patterns also with the simple Hebb rule. Analytical and numerical work is in progress on this point, devoted to clarifying the performance with spatially correlated patterns [5].
Finally, we end this theoretical section by addressing the problem of optimal performance for a different choice of the synaptic weights. In this direction, it is of basic interest to understand whether a dynamical scheme which allows for dynamic selection of subnetworks provides a neural network model with enhanced optimal capacity with respect to the classical spin models. Assuming that nothing is known about the couplings, one can consider the J_ij as dynamical variables and study the fractional volume in the space of interactions that makes the patterns fixed points of the dynamics. Following Gardner and Derrida [6], we describe the problem in terms of a cost-energy function and study its statistical mechanics: for a generic choice of the {J_ij}, the cost function E_i is defined to be the number of patterns such that a given site i is wrong (with respect to (1)):
E_i({J_ij}, {ε_i^μ}) = Σ_{μ=1}^P [ ε_i^μ (Θ(h_i^μ ξ_i^μ + γ) − Θ(h_i^μ ξ_i^μ)) + (1 − ε_i^μ) Θ(γ² − (h_i^μ)²) ]    (6)

where Θ is the step function, the h_i^μ = (1/√N) Σ_j J_ij ε_j^μ ξ_j^μ are the local fields, γ is the threshold of the inhibitory feedback, and the ε_i^μ ∈ {0, 1} are the variables that identify the subspace M (ε_i^μ = 1 if i ∈ M and zero otherwise).
the following partition function
(7)
Since the latter task seems unmanageable, as a first step we resort to two simplified
version of the model which, separately, retain its main characteristics (subspaces
and non-monotonicity); in particular:
(i) we assume that the {?f} are quenched random variables, distributed according
to P(?f) = (1 - A)6(?f) + A6(?f - 1), A E [0,1];
(ii) we consider the case of a two-state (?1) non-monotonic transfer function.
For lack of space, here we list only the final results. The expressions of the R.S.
critical capacity for the models are, respectively:
('
Q~.s? b; A) = { 2(1 - A) Jo
D(b - ()2
1
A
+ "2
+ A"y
00
D(b - ()2
}-l
(8)
(9)
where D( =
1 ~
J7Le 2 d( (for (9) see also Ref.[4]).
V 271"
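Taking the reconstruction of (8) above at face value (the bracketed terms are our best reading of the damaged source), it can be evaluated numerically, e.g.:

```python
import numpy as np
from scipy.integrate import quad

def alpha_c_rs(gamma, lam):
    """Numerical evaluation of the R.S. capacity in eq. (8) as reconstructed above."""
    D = lambda z: np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)   # Gaussian measure
    i1 = quad(lambda z: D(z) * (gamma - z) ** 2, 0.0, gamma)[0]
    i2 = quad(lambda z: D(z) * (gamma - z) ** 2, gamma, np.inf)[0]
    return 1.0 / (2 * (1 - lam) * i1 + lam / 2 + lam * i2)

print(alpha_c_rs(0.8, 0.5))
```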
The values of the critical capacity one finds are much higher than the monotonic perceptron capacity (α_c = 2). Unfortunately, the latter results are not reliable in that the stability analysis shows that the RS solutions are unstable. Replica symmetry breaking is thus required. All the details concerning the computation with one step of replica symmetry breaking of the critical capacity and stabilities distribution can be found in Ref. [9]. Here we just quote the final quantitative result concerning optimal capacity for the non-monotonic two-state model: numerical evaluation of the saddle-point equations (for unbiased patterns) gives α_c(γ_opt) ≈ 4.8 with γ_opt ≈ 0.8, the corresponding R.S. value from (9) being α_c^{R.S.} ≈ 10.5.
3 HARDWARE IMPLEMENTATION: a digital programmable integrated circuit
The performance of the network discussed in the above section points out the good behavior of the dynamical approach. Our goal is now to investigate the performance of this system with special hardware. Commercial neural chips [9] and [10] are not feasible: the features of our net require a non-monotonic transfer characteristic due to local inhibitions. These aspects are not allowed in traditional networks. The implementation of a full custom chip is, on the other side, a hasty choice. The model is still being studied: new developments must be expected in the near future. Therefore we decided to build a prototype based on programmable logic circuits. This solution allows us to implement the circuit in a short time without detriment to the performance. Moreover the same circuit will be easily updated to the next evolutions. After an analysis of the existing logic circuits we decided to use FPGA devices [11]. The reasons are that we need a large quantity of internal registers, to represent both synapses and capacitors, and the fastest interconnections. The Xilinx 4000 family [12] offers us the most interesting approach: up to 20000 gates are programmable and up to 28 Kbit of RAM are available. Moreover a 3 ns propagation delay between internal blocks allows us to implement very fast systems. We decided to use a XC4020 circuit with 20000 equivalent gates. The main problems related to the implementation of our model are the following: (a) number of neurons, (b) number of connections and (c) computation time parameters. (a) and (b) are obviously related to the logic device we have at our disposal. The number of gates we can use to implement the transfer function of our non-monotonic neurons is mutually exclusive with the number of bits we decide to assign to the weights. The 20000 gates must be divided between logic gates and RAM cells. Parameter (c) depends on our choices in implementing the neural network. We can decide to connect the logic blocks in a sequential or in a parallel way. The global propagation time is the sum of the propagation delays of each logic block, from the input to the output. Therefore if we put more blocks in parallel we don't increase the propagation delay and the time performance is better. Unfortunately the parallel solution clashes with the limitations of the available logic blocks of our device. Therefore we decided to design two chips: the first circuit implements 16 neurons in a faster parallel implementation and the second circuit allows us to use 32 (or more) neurons in a slower serial approach. Here we'll describe the fastest implementation.

Figure 1 shows the 16 neurons of the neural chip. Each neuron, described in Figure 2, performs a sequential sum and multiplication of the outputs of the other 15 neurons by the synaptic values stored inside the internal RAM. A special circuit implements the activation function described in the previous section. All the neurons perform these operations in a parallel way: 15 clock pulses are sufficient to perform the complete operation for the system. Figure 2 shows the circuit of the neuron. T1 is a RAM where the synapses T_ij are stored after the training phase. M1 and A1 perform sums and multiplications according to our model. D1 simulates the λ decay factor: every 15 clock cycles, which correspond to a complete cycle of sum and multiplication for all the 16 neurons, this circuit decreases the input of the neuron by a factor λ. The activation function (1) is realized by F1. Such a circuit emulates a three-level logic, based on −1, 0 and +1 values, by using two full adder blocks. Limitations due to the electrical characteristics of the circuit impose a maximum clock rate of 20 MHz. The 16-neuron version of the chip takes from 4 to 6 complete computations to gain stability, and every computation is 16 clock cycles long. Therefore the network gives a stable state after 3 μs at maximum. The second version of this circuit allows the use of more neurons at a lower speed. We used the Xilinx device to implement one neuron while the synapses and the capacitors are stored in an external fast memory. The single neuron is time-multiplexed in order to emulate a large number of identical devices. At each step, both synapses and state variables are downloaded and uploaded from an external memory. This solution is obviously slower than the first one but a larger number of neurons can be implemented. A 32-neuron version takes about 6 μs to reach a stable configuration.
[Block diagram omitted: 16 neuron blocks with in(1:0)/out(1:0) lines, an OUTPUT(2n:1) bus and a controller.]
Figure 1 - Neural Chip
[Schematic omitted: neuron datapath with synapse RAM T1, multiplier M1, adder A1, decay block D1 and activation block F1.]
Figure 2 - Neuron
4 CONCLUSION
A modified approach to attractor neural networks and its implementation on a Xilinx XC4020 FPGA was discussed. The chip is now under test. Six μs are sufficient for the relaxation of the system in a stable state, and the recognition of an input pattern is thus quite fast. A next step will be the definition of a multiple chip system endowed with more than 32 neurons, with the weights stored in an external fast memory.
Acknowledgements
This work was partially supported by the Annethe-INFN Italian project and by Progetto finalizzato sistemi informatici e calcolo parallelo of CNR under grant N. 91.00884.PF69.
References
[1] R. Zecchina, "Computational and Physical Role of Local Inhibition in Attractor Neural Networks: a Simple Model," Parallel Architectures and Neural Networks, ed. E. R. Caianiello, World Scientific (1992).
[2] M. Morita, S. Yoshizawa, H. Nakano, "Analysis and Improvement of the Dynamics of Autocorrelated Associative Memory," IEICE Trans. J73-D-II, 232 (1990).
[3] K. Kobayashi, "On the Capacity of a Neuron with a Non-Monotone Output Function," Network, 2, 237 (1991).
[4] D. J. Amit, H. Gutfreund, H. Sompolinsky, "Storing Infinite Numbers of Patterns in a Spin-Glass Model of Neural Networks," Phys. Rev. Lett., 55, 1530 (1985).
[5] G. Boffetta, N. Brunel, R. Monasson, R. Zecchina, in preparation (1993).
[6] E. Gardner, B. Derrida, "Optimal Storage Properties of Neural Network Models," J. Phys., A21, 271 (1988).
[7] G. Boffetta, R. Monasson, R. Zecchina, "Symmetry Breaking in Non-Monotonic Neural Networks," in preparation (1992).
[8] N. Brunel, R. Zecchina, "Statistical Mechanics of Optimal Memory Retrieval in the Space of Dynamic Neuronal Activities," preprint (1993).
[9] "An Electrically Trainable Artificial Neural Network," Proceedings of IJCNN, 1989, San Diego.
[10] M. Dzwonczyk, M. Leblanc, "INCA: An Integrated Neurocomputing Architecture," Proceedings of AIAA Computing in Aerospace, October 1991.
[11] W. R. Moore, W. Luk, "FPGAs," Abingdon EE-CS Books, 1991.
[12] "The XC4000 Data Book," Xilinx, 1991.
Examples are not Enough, Learn to Criticize!
Criticism for Interpretability
Been Kim*
Allen Institute for AI
[email protected]
Rajiv Khanna
UT Austin
[email protected]
Oluwasanmi Koyejo
UIUC
[email protected]
Abstract
Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely
sufficient to represent the gist of the complexity. In order for users to construct
better mental models and understand complex data distributions, we also need
criticism to explain what is not captured by prototypes. Motivated by the Bayesian
model criticism framework, we develop MMD-critic which efficiently learns prototypes and criticism, designed to aid human interpretability. A human subject pilot
study shows that the MMD-critic selects prototypes and criticism that are useful
to facilitate human understanding and reasoning. We also evaluate the prototypes
selected by MMD-critic via a nearest prototype classifier, showing competitive
performance compared to baselines.
1
Introduction and Related Work
As machine learning (ML) methods have become ubiquitous in human decision making, their
transparency and interpretability have grown in importance (Varshney, 2016). Interpretability is
particularly important in domains where decisions can have significant consequences. For example,
the pneumonia risk prediction case study in Caruana et al. (2015) showed that a more interpretable
model could reveal important but surprising patterns in the data that complex models overlooked.
Studies of human reasoning have shown that the use of examples (prototypes) is fundamental to the
development of effective strategies for tactical decision-making (Newell and Simon, 1972; Cohen
et al., 1996). Example-based explanations are widely used in the effort to improve interpretability.
A popular research program along these lines is case-based reasoning (CBR) (Aamodt and Plaza,
1994), which has been successfully applied to real-world problems (Bichindaritz and Marling, 2006).
More recently, the Bayesian framework has been combined with CBR-based approaches in the
unsupervised-learning setting, leading to improvements in user interpretability (Kim et al., 2014). In
a supervised learning setting, example-based classifiers have been shown to achieve comparable
performance to non-interpretable methods, while offering a condensed view of a dataset (Bien and
Tibshirani, 2011).
However, examples are not enough. Relying only on examples to explain the models' behavior
can lead to over-generalization and misunderstanding. Examples alone may be sufficient when the
distribution of data points is "clean", in the sense that there exists a set of prototypical examples
which sufficiently represent the data. However, this is rarely the case in real world data. For instance,
fitting models to complex datasets often requires the use of regularization. While the regularization
adds bias to the model to improve generalization performance, this same bias may conflict with the
distribution of the data. Thus, to maintain interpretability, it is important, along with prototypical
examples, to deliver insights signifying the parts of the input space where prototypical examples
* All authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
do not provide good explanations. We call the data points that do not quite fit the model criticism
samples. Together with prototypes, criticism can help humans build a better mental model of the
complex data space.
Bayesian model criticism (BMC) is a framework for evaluating fitted Bayesian models, and was
developed to aid model development and selection by helping to identify where and how a particular
model may fail to explain the data. It has quickly developed into an important part of model design,
and Bayesian statisticians now view model criticism as an important component in the cycle of
model construction, inference and criticism (Gelman et al., 2014). Lloyd and Ghahramani (2015)
recently proposed an exploratory approach for statistical model criticism using the maximum mean
discrepancy (MMD) two sample test, and explored the use of the witness function to identify the
portions of the input space the model most misrepresents the data. Instead of using the MMD to
compare two models as in classic two sample testing (Gretton et al., 2008), or to compare the model
to input data as in the Bayesian model criticism of Lloyd and Ghahramani (2015), we consider a novel
application of the MMD, and its associated witness function as a principled approach for selecting
prototype and criticism samples.
We present the MMD-critic, a scalable framework for prototype and criticism selection to improve
the interpretability of machine learning methods. To our best knowledge, ours is the first work which
leverages the BMC framework to generate explanations for machine learning methods. MMD-critic
uses the MMD statistic as a measure of similarity between points and potential prototypes, and
efficiently selects prototypes that maximize the statistic. In addition to prototypes, MMD-critic
selects criticism samples i.e. samples that are not well-explained by the prototypes using a regularized
witness function score. The scalability follows from our analysis, where we show that under certain
conditions, the MMD for prototype selection is a supermodular set function. Our supermodularity
proof is general and may be of independent interest. While we are primarily concerned with prototype
selection and criticism, we quantitatively evaluate the performance of MMD-critic as a nearest
prototype classifier, and show that it achieves comparable performance to existing methods. We also
present results from a human subject pilot study which shows that including the criticism together
with prototypes is helpful for an end-task that requires the data-distributions to be well-explained.
2
Preliminaries
This section includes notation and a few important definitions. Vectors are denoted by lower case $x$ and matrices by capital $X$. The Euclidean inner product between matrices $A$ and $B$ is given by $\langle A, B \rangle = \sum_{i,j} a_{i,j} b_{i,j}$. Let $\det(X)$ denote the determinant of $X$. Sets are denoted by sans serif, e.g., $\mathsf{S}$. The reals are denoted by $\mathbb{R}$. $[n]$ denotes the set of integers $\{1, \dots, n\}$, and $2^{\mathsf{V}}$ denotes the power set of $\mathsf{V}$. The indicator function $\mathbb{1}[a]$ takes the value 1 if its argument $a$ is true and 0 otherwise. We denote probability distributions by either $P$ or $Q$. The notation $|\cdot|$ denotes cardinality when applied to sets, and absolute value when applied to real values.
2.1
Maximum Mean Discrepancy (MMD)
The maximum mean discrepancy (MMD) is a measure of the difference between distributions $P$ and $Q$, given by the supremum over a function space $\mathcal{F}$ of differences between the expectations with respect to the two distributions. The MMD is given by:
$$\mathrm{MMD}(\mathcal{F}, P, Q) = \sup_{f \in \mathcal{F}} \left( \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \right). \quad (1)$$
When $\mathcal{F}$ is a reproducing kernel Hilbert space (RKHS) with kernel function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, the supremum is achieved at (Gretton et al., 2008):
$$f(x) = \mathbb{E}_{X' \sim P}[k(x, X')] - \mathbb{E}_{X' \sim Q}[k(x, X')]. \quad (2)$$
The function (2) is also known as the witness function as it measures the maximum discrepancy
between the two expectations in F. Observe that the witness function is positive whenever Q underfits
the density of P , and negative wherever Q overfits P . We can substitute (2) into (1) and square the
result, leading to:
$$\mathrm{MMD}^2(\mathcal{F}, P, Q) = \mathbb{E}_{X, X' \sim P}[k(X, X')] - 2\,\mathbb{E}_{X \sim P,\, Y \sim Q}[k(X, Y)] + \mathbb{E}_{Y, Y' \sim Q}[k(Y, Y')]. \quad (3)$$
It is clear that $\mathrm{MMD}^2(\mathcal{F}, P, Q) \ge 0$, and $\mathrm{MMD}^2(\mathcal{F}, P, Q) = 0$ iff $P$ is indistinguishable from $Q$ on the RKHS $\mathcal{F}$. This population definition can be approximated using sample expectations. In particular, given $n$ samples from $P$ as $X = \{x_i \sim P, i \in [n]\}$, and $m$ samples from $Q$ as $Z = \{z_i \sim Q, i \in [m]\}$, the following is a finite sample approximation:
$$\mathrm{MMD}^2_b(\mathcal{F}, X, Z) = \frac{1}{n^2} \sum_{i,j \in [n]} k(x_i, x_j) - \frac{2}{nm} \sum_{i \in [n],\, j \in [m]} k(x_i, z_j) + \frac{1}{m^2} \sum_{i,j \in [m]} k(z_i, z_j), \quad (4)$$
and the witness function is approximated as:
$$f(x) = \frac{1}{n} \sum_{i \in [n]} k(x, x_i) - \frac{1}{m} \sum_{j \in [m]} k(x, z_j). \quad (5)$$
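To make the estimators above concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper; the function names and the bandwidth parameter gamma are our choices) of the biased MMD^2 estimate (4) and the empirical witness function (5):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||), matching the RBF form used later.
    d = np.sqrt(((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
    return np.exp(-gamma * d)

def mmd2_biased(X, Z, gamma=1.0):
    # Biased finite-sample estimate of MMD^2, Eq. (4).
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Z, gamma).mean()
            + rbf_kernel(Z, Z, gamma).mean())

def witness(x, X, Z, gamma=1.0):
    # Empirical witness function, Eq. (5), evaluated at the rows of x.
    return (rbf_kernel(x, X, gamma).mean(axis=1)
            - rbf_kernel(x, Z, gamma).mean(axis=1))
```

The witness values are positive where the sample X is denser than Z and negative where Z over-represents X, which is exactly the signal the criticism selection below exploits.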
3
MMD-critic for Prototype Selection and Criticism
Given $n$ samples from a statistical model $X = \{x_i, i \in [n]\}$, let $S \subseteq [n]$ represent a subset of the indices, so that $X_S = \{x_i \mid i \in S\}$. Given an RKHS with the kernel function $k(\cdot, \cdot)$, we can measure the maximum mean discrepancy between the samples and any selected subset using $\mathrm{MMD}^2(\mathcal{F}, X, X_S)$. MMD-critic selects prototype indices $S$ which minimize $\mathrm{MMD}^2(\mathcal{F}, X, X_S)$. For our purposes, it will be convenient to pose the problem as a normalized discrete maximization. To this end, consider the following cost function, given by the negation of $\mathrm{MMD}^2(\mathcal{F}, X, X_S)$ with an additive bias:
$$J_b(S) = \frac{1}{n^2} \sum_{i,j=1}^{n} k(x_i, x_j) - \mathrm{MMD}^2(\mathcal{F}, X, X_S) = \frac{2}{n|S|} \sum_{i \in [n],\, j \in S} k(x_i, y_j) - \frac{1}{|S|^2} \sum_{i,j \in S} k(y_i, y_j). \quad (6)$$
Note that the additive bias $\mathrm{MMD}^2(\mathcal{F}, X, \emptyset) = \frac{1}{n^2} \sum_{i,j=1}^{n} k(x_i, x_j)$ is a constant with respect to $S$. Further, $J_b(S)$ is normalized, since, when evaluated on the empty set, we have that:
$$J_b(\emptyset) = \min_{S \in 2^{[n]}} J_b(S) = \frac{1}{n^2} \sum_{i,j=1}^{n} k(x_i, x_j) - \frac{1}{n^2} \sum_{i,j=1}^{n} k(x_i, x_j) = 0.$$
MMD-critic selects $m^*$ prototypes as the subset of indices $S \subseteq [n]$ which optimize:
$$\max_{S \in 2^{[n]},\ |S| \le m^*} J_b(S). \quad (7)$$
For the purposes of optimizing the cost function (6), it will prove useful to exploit its linearity with respect to the kernel entries. The following Lemma is easily shown by enumeration.
Lemma 1. Let $J_b(\cdot)$ be defined as in (6); then $J_b(\cdot)$ is a linear function of $k(x_i, x_j)$. In particular, define $K \in \mathbb{R}^{n \times n}$ with $k_{i,j} = k(x_i, x_j)$, and $A(S) \in \mathbb{R}^{n \times n}$ with entries $a_{i,j}(S) = \frac{2}{n|S|} \mathbb{1}[j \in S] - \frac{1}{|S|^2} \mathbb{1}[i \in S]\, \mathbb{1}[j \in S]$; then $J_b(S) = \langle A(S), K \rangle$.
3.1
Submodularity and Efficient Prototype Selection
While the discrete optimization problem (6) may be quite complicated to optimize, we show that the
cost function Jb (S) is monotone submodular under conditions on the kernel matrix which are often
satisfied in practice, and which can be easily checked given a kernel matrix. Based on this result, we
describe the greedy forward selection algorithm for efficient prototype selection.
Let $F : 2^{[n]} \to \mathbb{R}$ represent a set function. $F$ is normalized if $F(\emptyset) = 0$. $F$ is monotonic if for all subsets $\mathsf{U} \subseteq \mathsf{V} \subseteq [n]$ it holds that $F(\mathsf{U}) \le F(\mathsf{V})$. $F$ is submodular if for all subsets $\mathsf{U}, \mathsf{V} \in 2^{[n]}$ it holds that $F(\mathsf{U} \cup \mathsf{V}) + F(\mathsf{U} \cap \mathsf{V}) \le F(\mathsf{U}) + F(\mathsf{V})$. Submodular functions have a diminishing returns property (Nemhauser et al., 1978), i.e., the marginal gain of adding elements decreases with the size of the set. When $F$ is submodular, $-F$ is supermodular (and vice versa).
We prove submodularity for a larger class of problems, then show submodularity of (6) as a special
case. Our proof for the larger class may be of independent interest. In particular, the following
Theorem considers general discrete optimization problems which are linear matrix functionals, and
shows sufficient conditions on the matrix for the problem to be monotone and/or submodular.
Theorem 2 (Monotone Submodularity for Linear Forms). Let $H \in \mathbb{R}^{n \times n}$ (not necessarily symmetric) be element-wise non-negative and bounded, with upper bound $h^* = \max_{i,j \in [n]} h_{i,j} > 0$. Further, construct the binary matrix representation of the indices that achieve the maximum as $E \in \{0, 1\}^{n \times n}$ with $e_{i,j} = 1$ if $h_{i,j} = h^*$ and $e_{i,j} = 0$ otherwise, and its complement $E^0 = \mathbf{1} - E$ with the corresponding set $\mathsf{E}_0 = \{(i, j) \text{ s.t. } e_{i,j} = 0\}$. Given the ground set $\mathsf{S} \subseteq 2^{[n]}$, consider the linear form $F(H, S) = \langle A(S), H \rangle\ \forall S \in \mathsf{S}$. Given $m = |S|$, define the functions:
$$\alpha(n, m) = \frac{a(S \cup \{u\}) - a(S)}{b(S)}, \qquad \beta(n, m) = \frac{a(S \cup \{u\}) + a(S \cup \{v\}) - a(S \cup \{u, v\}) - a(S)}{b(S \cup \{u, v\}) + d(S)}, \quad (8)$$
where $a(S) = F(E, S)$ and $b(S) = F(E^0, S)$ for all $u, v \notin S$ (additional notation suppressed in $\alpha(\cdot)$ and $\beta(\cdot)$ for clarity). Let $m^* = \max_{S \in \mathsf{S}} |S|$ be the maximal cardinality of any element in the ground set.
1. If $h_{i,j} \ge h^* \alpha(n, m)\ \forall\, 0 \le m \le m^*,\ \forall (i, j) \in \mathsf{E}_0$, then $F(H, S)$ is monotone.
2. If $h_{i,j} \ge h^* \beta(n, m)\ \forall\, 0 \le m \le m^*,\ \forall (i, j) \in \mathsf{E}_0$, then $F(H, S)$ is submodular.
Finally, we consider a special case of Theorem 2 for the MMD.
Corollary 3 (Monotone Submodularity for MMD). Let the kernel matrix $K \in \mathbb{R}^{n \times n}$ be element-wise non-negative, with equal diagonal terms $k_{i,i} = k^* > 0\ \forall i \in [n]$, and be diagonally dominant. If the off-diagonal terms $k_{i,j}$, $\forall i, j \in [n], i \ne j$, satisfy $0 \le k_{i,j} \le \frac{2 k^*}{n^3 + 2n^2 - 2n - 3}$, then $J_b(S)$ given by (6) is monotone submodular.
The diagonal dominance condition expressed by Corollary 3 is easy to check given a kernel matrix. We also note that the conditions can be significantly weakened if one determines the required number of prototypes $m^* = \max |S| \le n$ a priori. This is further simplified for the MMD since the bounds (8) are both monotonically decreasing functions of $m$, so the condition need only be checked for $m^*$. Observe that diagonal dominance is not a necessary condition, as the more general approach in Theorem 2 allows arbitrarily indexed maximal entries in the kernel. Diagonal dominance is assumed to simplify the resulting expressions.
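Since the corollary's conditions are stated entirely in terms of the kernel matrix, they can be verified with a few lines of code. The sketch below is our own illustration; in particular, the `bound` expression mirrors our reading of the off-diagonal bound in Corollary 3 above and should be treated as an assumption:

```python
import numpy as np

def check_corollary3(K, tol=1e-12):
    # Checks element-wise non-negativity, a constant diagonal k*,
    # diagonal dominance, and the off-diagonal bound of Corollary 3.
    n = K.shape[0]
    k_star = K[0, 0]
    off_diag = K[~np.eye(n, dtype=bool)]
    bound = 2.0 * k_star / (n**3 + 2 * n**2 - 2 * n - 3)  # assumed form of the bound
    row_off_sums = K.sum(axis=1) - np.diag(K)
    return (np.all(K >= -tol)
            and np.allclose(np.diag(K), k_star)
            and np.all(np.diag(K) >= row_off_sums - tol)   # diagonal dominance
            and np.all(off_diag <= bound + tol))
```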
Perhaps more important in practice is our observation that the diagonal dominance condition
expressed by Corollary 3 is satisfied by parametrized kernels with appropriately selected parameters.
We provide an example for radial basis function (RBF) kernels and powers of positive standardized
kernels. Further examples and more general conditions are left for future work.
Example 4 (Radial basis function kernel). Consider the radial basis function kernel $K$ with entries $k_{i,j} = k(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|)$ evaluated on a sample $X$ with non-duplicate points, i.e., $x_i \ne x_j\ \forall x_i, x_j \in X$. The off-diagonal kernel entries $k_{i,j}$, $i \ne j$, monotonically decrease with respect to increasing $\gamma$. Thus, there exists $\gamma^*$ such that Corollary 3 is satisfied for $\gamma \ge \gamma^*$.
Example 5 (Powers of positive standardized kernels). Consider an element-wise positive kernel matrix $G$ standardized to be element-wise bounded $0 \le g_{i,j} < 1$ with unitary diagonal $g_{i,i} = 1\ \forall i \in [n]$. Define the kernel power $K$ with $k_{i,j} = g_{i,j}^p$. The off-diagonal kernel entries $k_{i,j}$, $i \ne j$, monotonically decrease with respect to increasing $p$. Thus, there exists $p^*$ such that Corollary 3 is satisfied for $p \ge p^*$.
Beyond the examples outlined here, similar conditions can be enumerated for a wide range of
parametrized kernel functions, and are easily checked for model-based kernels e.g. the Fisher kernel
(Jaakkola et al., 1999), useful for comparing data points based on similarity with respect to a
probabilistic model. Our interpretation of these examples is that the conditions of Corollary 3
are not excessively restrictive. While constrained maximization of submodular functions is generally
NP-hard, the simple greedy forward selection heuristic has been shown to perform almost as well as
the optimal in practice, and is known to have strong theoretical guarantees.
Theorem 6 (Nemhauser et al. (1978)). For any normalized, monotonic submodular function $F$, the set $S^g$ obtained by the greedy algorithm achieves at least a constant fraction $\left(1 - \frac{1}{e}\right)$ of the objective value obtained by the optimal solution, i.e., $F(S^g) \ge \left(1 - \frac{1}{e}\right) \max_{|S| \le m} F(S)$.
In addition, no polynomial time algorithm can provide a better approximation guarantee unless P = NP (Feige, 1998). An additional benefit of the greedy approach is that it does not require the number of prototypes $m^*$ to be decided at training time, so, assuming the kernel satisfies the appropriate conditions, training can be stopped at any $m^*$ based on computational constraints, while still returning meaningful results. The greedy algorithm is outlined in Algorithm 1.
Algorithm 1 Greedy algorithm: max F(S) s.t. |S| <= m*
Input: m*, S = {}
while |S| < m* do
  for each i in [n]\S: f_i = F(S + {i}) - F(S)
  S = S + {argmax_i f_i}
end while
Return: S.
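Specialized to the prototype objective J_b of (6), the greedy loop can be written directly against the kernel matrix. This is our own illustrative sketch; a production implementation would maintain running kernel sums so that each gain evaluation is O(n) rather than recomputing J_b from scratch:

```python
import numpy as np

def greedy_prototypes(K, m_star):
    # Greedy forward selection maximizing J_b(S), Eq. (6).
    # K: (n, n) kernel matrix. Returns the selected prototype indices.
    n = K.shape[0]

    def J(S):
        if not S:
            return 0.0
        idx = np.asarray(S)
        return (2.0 / (n * len(S))) * K[:, idx].sum() \
            - K[np.ix_(idx, idx)].sum() / len(S) ** 2

    S = []
    while len(S) < min(m_star, n):
        current = J(S)
        gain, best = max((J(S + [i]) - current, i)
                         for i in range(n) if i not in S)
        S.append(best)
    return S
```

By Theorem 6, the set returned by this loop is within a (1 - 1/e) factor of the best achievable J_b value whenever the kernel satisfies the submodularity conditions above.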
3.2
Model Criticism
In addition to selecting prototype samples, MMD-critic characterizes the data points not well
explained by the prototypes, which we call the model criticism. These data points are selected as
the largest values of the witness function (5) i.e. where the similarity between the dataset and the
prototypes deviate the most. Consider the cost function:
$$L(C) = \sum_{l \in C} \left| \frac{1}{n} \sum_{i \in [n]} k(x_i, x_l) - \frac{1}{m} \sum_{j \in S} k(x_j, x_l) \right|. \quad (9)$$
The absolute value ensures that we measure both positive deviations f (x) > 0 where the prototypes
underfit the density of the samples, and negative deviations f (x) < 0, where the prototypes overfit
the density of the samples. Thus, we focus primarily on the magnitude of deviation, rather than its
sign. The following theorem shows that (9) is a linear function of C.
Theorem 7. The criticism function L(C) is a linear function of C.
We found that the addition of a regularizer which encourages a diverse selection of criticism points
improved performance. Let $r : 2^{[n]} \to \mathbb{R}$ represent a regularization function. We select the criticism points as the maximizers of the cost function:
$$\max_{C \subseteq [n] \setminus S,\ |C| \le c^*} L(C) + r(K, C), \quad (10)$$
where $[n] \setminus S$ denotes all indices excluding the prototypes, and $c^*$ is the number of criticism
points desired. Fortunately, due to the linearity of (5), the optimization function (10) is submodular
when the regularization function is submodular. We encourage the use of regularizers which incorporate diversity into the criticism selection. We found the best qualitative performance using the
log-determinant regularizer (Krause et al., 2008). Let $K_{C,C}$ be the sub-matrix of $K$ corresponding to the pairs of indices in $C \times C$; then the log-determinant regularizer is given by:
$$r(K, C) = \log \det K_{C,C}, \quad (11)$$
which is known to be submodular. Further, several researchers have found, both in theory and practice
(Sharma et al., 2015), that greedy optimization is an effective strategy for optimization. We apply the
greedy algorithm for criticism selection with the function F (C) = L(C) + r(K, C).
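Putting (9)-(11) together, criticism selection is another greedy loop, this time over the regularized witness score. The sketch below is our own illustration (it recomputes the log-determinant at every step for clarity; rank-one updates of a Cholesky factor would be the efficient alternative, and it assumes c* does not exceed the number of available candidates):

```python
import numpy as np

def select_criticism(K, prototypes, c_star):
    # Greedy maximization of L(C) + log det K_{C,C}, Eqs. (9)-(11).
    n = K.shape[0]
    # |witness| score of Eq. (9) for every point.
    w = np.abs(K.mean(axis=1) - K[:, list(prototypes)].mean(axis=1))
    candidates = [i for i in range(n) if i not in set(prototypes)]

    def score(C):
        if not C:
            return 0.0
        _, logdet = np.linalg.slogdet(K[np.ix_(C, C)])
        return w[C].sum() + logdet

    C = []
    for _ in range(c_star):
        current = score(C)
        gain, best = max((score(C + [i]) - current, i)
                         for i in candidates if i not in C)
        C.append(best)
    return C
```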
4
Related Work
There is a large literature on techniques for selecting prototypes that summarize a dataset, and a full
literature survey is beyond the scope of this manuscript. Instead, we overview a few of the most
relevant references. The K-medoid clustering (Kaufman and Rousseeuw, 1987) is a classic technique
for selecting a representative subset of data points, and can be solved using various iterative algorithms.
K-medoid clustering is quite similar to K-means clustering, with the additional condition that the
presented prototypes must be in the dataset. The ubiquity of large datasets has led to resurgence
of interest in the data summarization problem, also known as the set cover problem. Progress has
included novel cost functions and algorithms for several domains including image summarization
(Simon et al., 2007) and document summarizauion (Lin and Bilmes, 2011). Recent innovations also
include highly scalable and distributed algorithms (Badanidiyuru et al., 2014; Mirzasoleiman et al.,
2015). There is also a large literature on variations of the set cover problem tuned for classification,
such as the cover digraph approach of (Priebe et al., 2003) and prototype selection for interpretable
classification (Bien and Tibshirani, 2011), which involves selecting prototypes that maximize the
coverage within the class, but minimize the coverage across classes.
Submodular / Supermodular functions are well studied in the combinatorial optimization literature, with several scalable algorithms that come with optimization theoretic optimality guarantees (Nemhauser et al., 1978). In the Bayesian modeling literature, submodular optimization has
previously been applied for approximate inference by Koyejo et al. (2014). The technical conditions
required for submodularity of (6) are due to averaging of the kernel similarity scores, as the average requires a division by the cardinality $|S|$. In particular, the analogue of (6) which replaces all the averages by sums (i.e., removes all division by $|S|$) is equivalent to the well-known submodular functions previously used for scene (Simon et al., 2007) and document (Lin and Bilmes, 2011) summarization, given by $\frac{2}{n} \sum_{i \in [n],\, j \in S} k(x_i, y_j) - \lambda \sum_{i,j \in S} k(y_i, y_j)$, where $\lambda > 0$ is a regularization parameter.
The function that results is known to be submodular when the kernel is element-wise positive i.e.
without the need for additional diagonal dominance conditions. On the other hand, the averaging
has a desirable built-in balancing effect. When using the sum, practitioners must tune the additional
regularization parameter to achieve a similar balance.
5
Results
We present results for the proposed technique MMD-critic using USPS hand written digits (Hull,
1994) and Imagenet (Deng et al., 2009) datasets. We quantitatively evaluate the prototypes in terms
of predictive quality as compared to related baselines on USPS hand written digits dataset. We also
present preliminary results from a human subject pilot study. Our results suggest that the model
criticism, which is unique to the proposed MMD-critic, is especially useful to facilitate human understanding. For all datasets, we employed the radial basis function (RBF) kernel with entries $k_{i,j} = k(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|)$, which satisfies the conditions of Corollary 3 for sufficiently large $\gamma$ (cf. Example 4; see Example 5 and the following discussion for alternative feasible kernels).
The Nearest Prototype Classifier: While our primary interest is in interpretable prototype selection and criticism, prototypes may also be useful for speeding up memory-based machine learning
techniques such as the nearest neighbor classifier by restricting the neighbor search to the prototypes,
sometimes known as the nearest prototype classifier (Bien and Tibshirani, 2011; Kuncheva and
Bezdek, 1998). This classification provides an objective (although indirect) evaluation of the quality
of the selected prototypes, and is useful for setting hyperparameters. We employ a 1 nearest neighbor
classifier using the Hilbert space distance induced by the kernels. Let $y_i \in [k]$ denote the label associated with each prototype $i \in S$, for $k$ classes. As we employ normalized kernels (where the diagonal is 1), it is sufficient to measure the pairwise kernel similarity. Thus, for a test point $\hat{x}$, the nearest neighbor classifier reduces to:
$$\hat{y} = y_{i^*}, \quad \text{where } i^* = \arg\min_{i \in S} \|\hat{x} - x_i\|^2_{\mathcal{H}_K} = \arg\max_{i \in S} k(\hat{x}, x_i).$$
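In code, the classifier needs nothing beyond the kernel similarities between test points and prototypes (our own sketch, not the authors' implementation):

```python
import numpy as np

def nearest_prototype_predict(K_test_proto, proto_labels):
    # K_test_proto: (n_test, m) kernel values between test points and the
    # m prototypes. With a normalized kernel, the most similar prototype
    # is also the nearest one in the induced Hilbert space distance.
    nearest = np.argmax(K_test_proto, axis=1)
    return np.asarray(proto_labels)[nearest]
```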
5.1
MMD-critic evaluated on USPS Digits Dataset
The USPS hand written digits dataset Hull (1994) consists of n = 7291 training (and 2007 test)
greyscale images of 10 handwritten digits from 0 to 9. We consider two kinds of RBF kernels
(i) global: where the pairwise kernel is computed between all data points, and (ii) local: given
by $\exp(-\gamma \|x_i - x_j\|)\, \mathbb{1}[y_i = y_j]$, i.e., points in different classes are assigned a similarity score of
zero. The local approach has the effect of pushing points in different classes further apart. The
kernel hyperparameter was chosen to maximize the average cross-validated classification performance, then fixed for all other experiments.
Classification: We evaluated nearest prototype classifiers using MMD-critic, and compared to baselines (and reported performance) from Bien and Tibshirani (2011) (abbreviated as PS) and their implementation of K-medoids.
[Figure 1: Classification error vs. number of prototypes m = |S|. Left: test error (roughly 0.04-0.18) over 0-4000 prototypes for MMD-global, MMD-local, PS, and K-medoids; MMD-critic shows comparable (or improved) performance as compared to other models. Right: random subset of prototypes and criticism from the USPS dataset.]
Figure 1 (left) compares MMD-critic with global and local kernels,
to the baselines for different numbers of selected prototypes m = |S|. Our results show comparable
(or improved) performance as compared to other models. In particular, we observe that the global
kernels out-perform the local kernels2 by a small margin. We note that MMD is particularly effective
at selecting the first few prototypes (i.e. speed of error reduction as number of prototypes increases)
suggesting its utility for rapidly summarising the dataset.
Selected Prototypes and Criticism: Fig. 1 (right) presents a randomly selected subset of the
prototypes and criticism from the MMD-critic using the local kernel. We observe that the prototypes
capture many of the common ways of writing digits, while the criticism clearly capture outliers.
5.2
Qualitative Measure: Prototypes and Criticisms of Images
In this section, we learn prototypes and criticisms from the Imagenet dataset (Russakovsky et al.,
2015) using image embeddings from He et al. (2015). Each image is represented by a 2048 dimensions
vector embedding, and each image belongs to one of 1000 categories. We select two breeds of one
category (e.g., Blenheim spaniel) and run MMD-critic to learn prototypes and criticism. As shown
in Figure 2, MMD-critic learns reasonable prototypes and criticisms for two types of dog breeds. On
the left, criticisms picked out the different coloring (second criticism is in black and white picture),
as well as pictures capturing movements of dogs (first and third criticisms). Similarly, on the right,
criticisms capture the unusual, but potentially frequent pictures of dogs in costumes (first and second
criticisms).
5.3
Quantitative measure: Prototypes and Criticisms improve interpretability
We conducted a human pilot study to collect objective and subjective measures of interpretability
using MMD-critic. The experiment used the same dataset as Section 5.2. We define "interpretability"
in this work as the following: a method is interpretable if a user can correctly and efficiently predict
the method?s results. Under this definition, we designed a predictive task to quantitatively evaluate
the interpretability. Given a randomly sampled data point, we measure how well a human can predict
a group it belongs to (accuracy), and how fast they can perform the task (efficiency). We chose this
dataset as the task of assigning a new image to a group requires groups to be well-explained but does
not require specialized training.
We presented four conditions in the experiment. 1) raw images condition (Raw Condition) 2)
Prototypes Only (Proto Only Condition) 3) Prototypes and criticisms (Proto and Criticism Condition)
4) Uniformly sampled data points per group (Uniform Condition). Raw Condition contained 100 images per species (e.g., if a group contains 2 species, there are 200 images). Proto Only Condition, Proto and Criticism Condition, and Uniform Condition contain the same number of images.
2 Note that the local kernel trivially achieves perfect accuracy. Thus, in order to measure generalization performance, we do not use class labels for local-kernel test instances, i.e., we use the global kernel instead of the local kernel for test instances, regardless of training.
Figure 2: Learned prototypes and criticisms from Imagenet dataset (two types of dog breeds)
We used within-subject design to minimize the effect of inter-participant variability, with a balanced
Latin square to account for a potential learning effect. The four conditions were assigned to four
participants (four males) in a balanced manner. Each subject answered 21 questions, where the first
three questions are practice questions and not included in the analysis. Each question showed six
groups (e.g., red fox, kit fox) of a species (e.g., fox), and a randomly sampled data point that belongs
to one of the groups. Subjects were encouraged to answer the questions as quickly and accurately
as possible. A break was imposed after each question to mitigate the potential effect of fatigue. We
measured the accuracy of answers as well as the time they took to answer each question. Participants
were also asked to respond to 10 5-point Likert scale survey questions about their subjective measure
of accuracy and efficiency. Each survey question compared a pair of conditions (e.g., Condition A
was more helpful than condition B to correctly (or efficiently) assign the image to a group).
Subjects performed the best using Proto and Criticism Condition (M=87.5%, SD=20%). The
performance with Proto Only Condition was relatively similar (M=75%, SD=41%), while that with
Uniform Condition (M=55%, SD=38%, 37% decrease) and Raw Condition (M=56%, SD=33%, 36%
decrease) was substantially lower. In terms of speed, subjects were most efficient using Proto Only
Condition (M=1.04 mins/question, SD=0.28, 44% decrease compared to Raw Condition), followed
by Uniform Condition (M=1.31 mins/question, SD=0.59) and Proto and Criticism Condition (M=1.37
mins/question, SD=0.8). Subjects spent the most time with Raw Condition (M=1.86 mins/question,
SD=0.67).
Subjects indicated their preference of Proto and Criticism Condition over Raw Condition and
Uniform Condition. In a survey question that asks to compare Proto and Criticism Condition and
Raw Condition, a subject added that "[Proto and Criticism Condition resulted in] less confusion from trying to discover hidden patterns in a ton of images, more clues indicating what features are important". In particular, in a question that asks to compare Proto and Criticism Condition and
Proto Only Condition, a subject said that "The addition of criticisms made it easier to locate the defining features of the cluster within the prototypical images". The humans' superior performance
with prototypes and criticism in this preliminary study shows that providing criticisms together with
prototypes is a promising direction to improve the interpretability.
6
Conclusion
We present the MMD-critic, a scalable framework for prototype and criticism selection to improve
the interpretability of complex data distributions. To our best knowledge, ours is the first work which
leverages the BMC framework to generate explanations. Further, MMD-critic shows competitive
performance as a nearest prototype classifier compared to existing methods. When criticism is
given together with prototypes, a human pilot study suggests that humans are better able to perform a
predictive task that requires the data-distributions to be well-explained. This suggests that criticism
and prototypes are a step towards improving interpretability of complex data distributions. For future
work, we hope to further explore the properties of MMD-critic such as the effect of the choice of
kernel, and weaker conditions on the kernel matrix for submodularity. We plan to explore applications
to larger datasets, aided by recent work on distributed algorithms for submodular optimization. We
also intend to complete a larger scale user study on how criticism and prototypes presented together
affect human understanding.
References
A. Aamodt and E. Plaza. Case-based reasoning: Foundational issues, methodological variations, and system
approaches. AI communications, 1994.
A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization: Massive
data summarization on the fly. In KDD. ACM, 2014.
I. Bichindaritz and C. Marling. Case-based reasoning in the health sciences: What?s next? AI in medicine, 2006.
J. Bien and R. Tibshirani. Prototype selection for interpretable classification. The Annals of Applied Statistics,
pages 2403?2424, 2011.
R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad. Intelligible models for healthcare: Predicting
pneumonia risk and hospital 30-day readmission. In KDD, 2015.
M.S. Cohen, J.T. Freeman, and S. Wolf. Metarecognition in time-stressed decision making: Recognizing,
critiquing, and correcting. Human Factors, 1996.
J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database.
In CVPR, 2009.
U. Feige. A threshold of ln n for approximating set cover. JACM, 1998.
A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin. Bayesian data analysis. Taylor & Francis, 2014.
A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Sch?lkopf, and A. Smola. A kernel method for the two-sample
problem. JMLR, 2008.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
J.J. Hull. A database for handwritten text recognition research. TPAMI, 1994.
T.S. Jaakkola, D. Haussler, et al. Exploiting generative models in discriminative classifiers. In NIPS, pages
487?493, 1999.
L. Kaufman and P. Rousseeuw. Clustering by means of medoids. North-Holland, 1987.
B. Kim, C. Rudin, and J.A. Shah. The Bayesian Case Model: A generative approach for case-based reasoning
and prototype classification. In NIPS, 2014.
O.O. Koyejo, R. Khanna, J. Ghosh, and R. Poldrack. On prior distributions and approximate inference for
structured variables. In NIPS, 2014.
A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in gaussian processes: Theory, efficient
algorithms and empirical studies. JMLR, 2008.
L. I. Kuncheva and J.C. Bezdek. Nearest prototype classification: clustering, genetic algorithms, or random
search? IEEE Transactions on Systems, Man, and Cybernetics, 28(1):160?164, 1998.
H. Lin and J. Bilmes. A class of submodular functions for document summarization. In ACL, 2011.
J. R. Lloyd and Z. Ghahramani. Statistical model criticism using kernel two sample tests. In NIPS, 2015.
B. Mirzasoleiman, A. Karbasi, A. Badanidiyuru, and A. Krause. Distributed submodular cover: Succinctly
summarizing massive data. In NIPS, 2015.
G. L Nemhauser, L.A. Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set
functions. Mathematical Programming, 1978.
A. Newell and H.A. Simon. Human problem solving. Prentice-Hall Englewood Cliffs, 1972.
C.E. Priebe, D.J. Marchette, J.G. DeVinney, and D.A. Socolinsky. Classification using class cover catch digraphs.
Journal of classification, 2003.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein,
A.C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
D. Sharma, A. Kapoor, and A. Deshpande. On greedy maximization of entropy. In ICML, 2015.
I. Simon, N. Snavely, and S.M. Seitz. Scene summarization for online image collections. In ICCV, 2007.
K.R. Varshney. Engineering safety in machine learning. arXiv:1601.04126, 2016.
5,860 | 6,301 | Large-Scale Price Optimization via Network Flow
Shinji Ito
NEC Corporation
[email protected]
Ryohei Fujimaki
NEC Corporation
[email protected]
Abstract
This paper deals with price optimization, which is to find the best pricing strategy
that maximizes revenue or profit, on the basis of demand forecasting models.
Though recent advances in regression technologies have made it possible to reveal
price-demand relationship of a large number of products, most existing price
optimization methods, such as mixed integer programming formulation, cannot
handle tens or hundreds of products because of their high computational costs. To
cope with this problem, this paper proposes a novel approach based on network
flow algorithms. We reveal a connection between supermodularity of the revenue
and cross elasticity of demand. On the basis of this connection, we propose an
efficient algorithm that employs network flow algorithms. The proposed algorithm
can handle hundreds or thousands of products, and returns an exact optimal solution
under an assumption regarding cross elasticity of demand. Even if the assumption
does not hold, the proposed algorithm can efficiently find approximate solutions as
good as other state-of-the-art methods, as empirical results show.
1
Introduction
Price optimization is a central research topic with respect to revenue management in marketing
science [10, 16, 18]. The goal is to find the best price strategy (a set of prices for multiple products)
that maximizes revenue or profit. There is a lot of literature regarding price optimization [1, 5, 10,
13, 17, 18, 20], and significant success has been achieved in industries such as online retail [7],
fast-fashion [5], hotels [13, 14], and airlines [16]. One key component in price optimization is
demand modeling, which reveals relationships between price and demand. Though traditional studies
have focused more on a single price-demand relationship, such as price elasticity of demand [13, 14]
and the law of diminishing marginal utility [16], multi-product relationships such as cross price
elasticity of demand [15] have recently received increased attention [5, 17]. Recent advances in
regression technologies (non-linear, sparse, etc.) make demand modeling over tens or even hundreds
of products possible, and data oriented demand modeling has become more and more important.
Given demand models of multiple products, the role of optimization is to find the best price strategy.
Most existing studies for multi-product price optimization employ mixed-integer programming [5, 13,
14] due to the discrete nature of individual prices, but their methods cannot be applied to large scale
problems with tens or hundreds of products since their computational costs exponentially increases
over increasing numbers of products. Though restricting demand models might make optimization
problems tractable [5, 7], such approaches cannot capture complicated price-demand relationships
and often result in poor performance. Ito and Fujimaki [9] have recently proposed a prescriptive
price optimization framework to efficiently solve multi-product price optimization with non-linear
demand models. In this prescriptive price optimization, the problem is transformed into a sort of
binary quadratic programming problem, and they have proposed an efficient relaxation method
based on semi-definite programming (SDP). Although their approach has significantly improved
computational efficiency over that of mixed-integer approaches, the computational complexity of
their SDP formulation requires O(M 6 ) in theory, where M is the number of products, and it is not
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
sufficiently scalable for large-scale problems with hundreds of products, as our empirical evaluation in Section 5 shows.
The goal of this paper is to develop an efficient algorithm for large scale multi-product price optimization problems that can handle hundreds of products as well as flexible demand models. Our main
technical contributions are two-fold. First, we reveal the connection between submodularity of the
revenue and cross elasticity of demand. More specifically, we show that the gross profit function of
the prescriptive price optimization is supermodular (i.e., the maximization of the gross profit function
is equivalent to the submodular minimization) under the assumption regarding cross elasticity of
demand that there are no pairs of complementary goods (we refer to this property as a substitute-goods
property).1 On the basis of the submodularity, we propose a practical, efficient algorithm that employs
network flow algorithms for minimum cut problems and returns exact solutions for problems with the
substitute-goods property. Further, even in cases in which the property does not hold, it can efficiently
find approximate solutions by iteratively improving submodular lower bounds. Our empirical results
show that the proposed algorithm can successfully handle hundreds of products and derive solutions
as good as other state-of-the-art methods, while its computational cost is much cheaper, regardless of
whether the substitute-goods property holds or not.
2
Literature review
Our price optimization problems reduce to binary quadratic problems such as (4). It is well known that submodular binary quadratic programming problems can be reduced to minimum cut problems [12], and hence can be solved by maximum flow algorithms. Also for unconstrained non-submodular binary quadratic programming problems, there is a lot of literature on optimization algorithms using minimum cut, especially in the context of Markov random field inference and energy minimization in computer vision [2, 3, 4, 8, 11, 22]. Above all, the QPBO method [2, 11] and its extensions such as the QPBOI method [19] are known to be state-of-the-art in terms of scalability
and theoretical properties. These QPBO/QPBOI and our method are similar in that they all employ
network flow algorithms and derive not only partial/approximate solutions but also lower bounds of
the exact optimal (minimum) value. Our method, however, differs from QPBO and its extensions in
network structures, accuracy and scalability, as is shown in Section 5.
3
Price optimization and submodularity in cross elasticity of demand
Suppose we have $M$ products, with product index denoted by $i \in \{1, \dots, M\}$. In prescriptive price optimization [9], for a price strategy $p = [p_1, \dots, p_M]^\top$, where $p_i$ is the price of the $i$-th product, and for external variables $r = [r_1, \dots, r_D]^\top$ such as weather, temperature, and day of the week, the sales quantity (demand) for the $i$-th product is modeled by the following regression formula:
$$q_i(p, r) = \sum_{j=1}^{M} f_{ij}(p_j) + \sum_{t=1}^{D} g_{it}(r_t), \quad (1)$$
where $f_{ii}$ expresses the effect of price elasticity of demand, $f_{ij}$ ($i \ne j$) reflects the effect of cross elasticity, and $g_{it}$ represents how the $t$-th external variable affects the sales quantity. Note that $f_{ij}$ for all $(i, j)$ can be arbitrary functions, and Eq. (1) covers various regression (demand) models, such as linear regression, additive models [21], linear regression models with univariate basis functions, etc. This paper assumes that the regression models are given, obtained using existing methods, and focuses its discussion on optimization.
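As a minimal illustration of the model class (1) (our own sketch; the univariate effect functions passed in are arbitrary placeholders, not fitted models), evaluating the demand vector is a double sum of one-dimensional terms:

```python
import numpy as np

def demand(p, r, f, g):
    # p: (M,) prices; r: (D,) external variables.
    # f[i][j]: callable implementing f_ij; g[i][t]: callable implementing g_it.
    M, D = len(p), len(r)
    return np.array([
        sum(f[i][j](p[j]) for j in range(M))
        + sum(g[i][t](r[t]) for t in range(D))
        for i in range(M)
    ])
```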
Given $q_i(p)$ for all $i$, a cost vector $c = [c_1, \dots, c_M]^\top$, and fixed external variables $r$, the gross profit can be represented as
$$\ell(p) = \sum_{i=1}^{M} (p_i - c_i)\, q_i(p) = \sum_{i=1}^{M} (p_i - c_i) \left( \sum_{j=1}^{M} f_{ij}(p_j) + \sum_{t=1}^{D} g_{it}(r_t) \right). \quad (2)$$
1 "Complementary goods" and "substitute goods" are terms in economics. A good example of complementary
goods might be wine and cheese, i.e., if we discount wine, the sales of cheese will increase. An example of
substitute goods might be products of different brands in the same product category. If we discount one product,
sales of the other products will decrease.
The goal of price optimization is to find $p$ maximizing $\ell(p)$. In practice, $p_i$ is often chosen from a finite set $P_i = \{P_{i1}, \dots, P_{iK}\} \subset \mathbb{R}$ of $K$ price candidates, where $P_{iK}$ might be a list price and $P_{ik}$ ($k < K$) might be discounted prices such as 10%-off, 5%-off, 3%-off. Then, the problem of maximizing the gross profit can be formulated as the following combinatorial optimization problem:
$$\text{Maximize } \ell(p) \quad \text{subject to } p_i \in P_i. \quad (3)$$
It is trivial to show that (3) is NP-hard in general.
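For intuition about why (3) is hard, note that exhaustive search over all price strategies is trivial to write but enumerates K^M candidates (our own sketch, usable only for a handful of products):

```python
import itertools

def brute_force_price_opt(P, gross_profit):
    # P: list of M candidate-price lists; gross_profit: callable computing l(p).
    best_p, best_val = None, float("-inf")
    for p in itertools.product(*P):  # K^M combinations
        val = gross_profit(p)
        if val > best_val:
            best_p, best_val = p, val
    return best_p, best_val
```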
Let us formally define the "substitute-goods property" as follows.
Definition 1 (Substitute-Goods Property). The demand model defined by (1) of the i-th product is
said to satisfy the substitute-goods property if $f_{ij}$ is monotone non-decreasing for all $j \ne i$.
The concept of substitute-goods property is practical and important because retailers often deal with
substitute goods. Suppose the situation in which a retailer decides a price strategy for different brands in the same product category. For example, supermarkets sell milk of different brands and car dealerships
sell various types of cars. These products are usually substitute goods. This kind of cross elasticity
effect is one of the advanced topics in revenue management and is practically important [13, 14, 17].
Our key observation is the connection between the substitute-goods property in marketing science
and the supermodularity of the gross profit function, which is formally described in the following
proposition.
Proposition 2. The gross profit function $\ell : P_1 \times \cdots \times P_M \to \mathbb{R}$ is supermodular2 if the demand models defined by (1) for all products satisfy the substitute-goods property.
The above proposition implies that, under the assumption of the substitute-goods property, problem
(3) can be solved precisely using submodular minimization algorithms, where time complexity is a
polynomial in M and K. This fact, however, does not necessarily imply that there exists a practical,
efficient algorithm for problem (3). Indeed, general submodular minimization algorithms are slow in
practice even though their time complexities are polynomial. Further, actual models do not always
satisfy the substitute-goods property. We propose solutions to these problems in the next section.
4
4.1
Network flow-based algorithm for revenue maximization
Binary quadratic programming formulation
This section shows that problem (3) can be reduced to the following binary quadratic programming problem (notation is explained in the latter part of this section):
$$\text{Minimize } x^\top A x + b^\top x \quad \text{subject to } x = [x_1, \dots, x_n]^\top \in \{0, 1\}^n,\ x_u \le x_v\ ((u, v) \in C). \quad (4)$$
Each variable $p_i$ takes $P_{ik}$ if and only if the binary vector $x_i = [x_{i1}, \dots, x_{i,K-1}]^\top \in \{0, 1\}^{K-1}$ satisfies:
$$x_i = c_k := [\underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k}]^\top \quad (k = 1, \dots, K). \quad (5)$$
Also we define $x = [x_1^\top, \dots, x_M^\top]^\top \in \{0, 1\}^{(K-1)M}$ and redefine the indices of the entries of $x$ as $x = [x_1, x_2, \dots, x_{(K-1)M}]$, i.e., $x_{i,k} = x_{(i-1)(K-1)+k}$, for notational simplicity.
Defining $\ell_{ij} : P_i \times P_j \to \mathbb{R}$ by $\ell_{ij}(p_i, p_j) = (p_i - c_i) f_{ij}(p_j)$ for $i \ne j$, and $\ell_i : P_i \to \mathbb{R}$ by $\ell_i(p_i) = (p_i - c_i)\left(f_{ii}(p_i) + \sum_{t=1}^{D} g_{it}(r_t)\right)$, we can express $\ell$ as
$$\ell(p) = \sum_{1 \le i, j \le M,\, i \ne j} \ell_{ij}(p_i, p_j) + \sum_{i=1}^{M} \ell_i(p_i). \quad (6)$$
2 We say that a function $f : D_1 \times \cdots \times D_n \to \mathbb{R}$ ($D_j \subseteq \mathbb{R}$) is submodular if $f(x) + f(y) \ge f(x \vee y) + f(x \wedge y)$ for all $x, y$, where $x \vee y$ and $x \wedge y$ denote the coordinate-wise maximum and minimum, respectively. We say a function $f$ is supermodular if $-f$ is submodular.
Algorithm 1 s-t cut for price optimization with the substitute-goods property
Input: Problem instance (A, b, C) of (4), where all entries of A are non-positive.
Output: An optimal solution x* to (4).
1: Construct a weighted directed graph G = (V, E, w) satisfying (9).
2: Add the edges C with weight infinity to G, i.e., set E <- E + C and w(u, v) <- infinity for all (u, v) in C.
3: Compute a minimum s-t cut U* of G, define x* by (10), and return x*.
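One possible realization of this algorithm (our own sketch, not the authors' implementation) uses the standard posiform reduction to build the graph of step 1 and networkx's minimum-cut routine; the graph constructed below satisfies (9) up to an additive constant:

```python
import networkx as nx

def solve_submodular_bqp(A, b, C):
    # Minimize x'Ax + b'x over {0,1}^n with x_u <= x_v for (u, v) in C,
    # assuming every entry of A is non-positive.
    n = len(b)
    G = nx.DiGraph()
    G.add_node("s")
    G.add_node("t")
    unary = [b[u] + A[u][u] for u in range(n)]  # diagonal terms are linear in x_u
    for u in range(n):
        for v in range(n):
            if u != v and A[u][v] != 0:
                # a_uv x_u x_v = a_uv x_u + (-a_uv) x_u (1 - x_v)
                unary[u] += A[u][v]
                G.add_edge(u, v, capacity=-A[u][v])
    for u in range(n):
        if unary[u] > 0:
            G.add_edge(u, "t", capacity=unary[u])    # paid when x_u = 1
        elif unary[u] < 0:
            G.add_edge("s", u, capacity=-unary[u])   # paid when x_u = 0
    for (u, v) in C:
        G.add_edge(u, v, capacity=float("inf"))      # forbids x_u = 1, x_v = 0
    _, (s_side, _) = nx.minimum_cut(G, "s", "t")
    return [1 if u in s_side else 0 for u in range(n)]
```

The source side of the cut corresponds to the variables set to 1, exactly as in (10).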
Using $x_i$, we can construct matrices $A^{ij} \in \mathbb{R}^{(K-1) \times (K-1)}$ for which it holds that
$$\ell_{ij}(p_i, p_j) = -x_i^\top A^{ij} x_j + \text{const}. \quad (7)$$
Indeed, the matrices $A^{ij} = [a^{ij}_{uv}]_{1 \le u, v \le K-1} \in \mathbb{R}^{(K-1) \times (K-1)}$ defined by
$$a^{ij}_{uv} = -\ell_{ij}(P_{i,u+1}, P_{j,v+1}) + \ell_{ij}(P_{i,u}, P_{j,v+1}) + \ell_{ij}(P_{i,u+1}, P_{j,v}) - \ell_{ij}(P_{i,u}, P_{j,v}) \quad (8)$$
satisfy (7). In a similar way, we can construct $b_i \in \mathbb{R}^{K-1}$ such that $\ell_i(p_i) = -b_i^\top x_i + \text{const}$. Accordingly, the objective function $\ell$ of problem (3) satisfies $\ell(p) = -(x^\top A x + b^\top x) + \text{const}$, where we define $A = [A^{ij}]_{1 \le i,j \le M} \in \mathbb{R}^{(K-1)M \times (K-1)M}$ and $b = [b_i]_{1 \le i \le M} \in \mathbb{R}^{(K-1)M}$. The conditions $x_i \in \{c_1, \dots, c_K\}$ $(i = 1, \dots, M)$ can be expressed as $x_u \le x_v$ $((u, v) \in C)$, where we define $C := \{((K-1)(i-1)+k+1,\ (K-1)(i-1)+k) \mid 1 \le i \le M,\ 1 \le k \le K-2\}$. Consequently, problem (3) is reduced to problem (4). Although [9] also gives another BQP formulation for problem (3) and relaxes it to a semi-definite programming problem, our construction of the BQP problem can be solved much more efficiently, as is explained in the next section.
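To make (8) concrete, the sketch below (our own code) fills one (K-1)x(K-1) block A^{ij} from a pairwise profit term l_ij and the candidate price lists; indices are zero-based, so P_i[u] plays the role of P_{i,u+1} in the paper's notation:

```python
import numpy as np

def block_A(l_ij, P_i, P_j):
    # A^{ij} of Eq. (8); l_ij(p_i, p_j) is the pairwise profit term.
    K = len(P_i)
    A = np.empty((K - 1, K - 1))
    for u in range(K - 1):
        for v in range(K - 1):
            A[u, v] = (-l_ij(P_i[u + 1], P_j[v + 1])
                       + l_ij(P_i[u], P_j[v + 1])
                       + l_ij(P_i[u + 1], P_j[v])
                       - l_ij(P_i[u], P_j[v]))
    return A
```

Assuming the candidate prices are listed in ascending order, the substitute-goods property makes each mixed difference in (8) non-negative, so every such block is element-wise non-positive, which is exactly what Algorithm 1 requires.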
4.2
Minimum cut for problems with substitute goods property
As is easily seen from (8), if the problem satisfies the substitute-goods property, the matrix $A$ has only non-positive entries. It is well known that unconstrained binary quadratic programming problems such as (4) with non-positive $A \in \mathbb{R}^{n \times n}$ and $C = \emptyset$ can be efficiently solved3 by algorithms for minimum cut [6]. Indeed, we can construct a positively weighted directed graph $G = (V = \{s, t, 1, 2, \dots, n\},\ E \subseteq V \times V,\ w : E \to \mathbb{R}_{>0} \cup \{\infty\})$4 for which
$$x^\top A x + b^\top x = c_G(\{s\} \cup \{u \mid x_u = 1\}) + \text{const} \quad (9)$$
holds for all $x \in \{0, 1\}^n$, where $c_G$ is the cut function of graph $G$.5 Hence, once we compute a minimum $s$-$t$ cut $U$, that is, a vertex set $U \subseteq V$ minimizing $c_G(U)$ subject to $s \in U$ and $t \notin U$, we can construct an optimal solution $x = [x_1, \dots, x_n]^\top$ to problem (4) by setting
$$x_u = \begin{cases} 1 & (u \in U) \\ 0 & (u \notin U) \end{cases} \quad (u = 1, \dots, n). \quad (10)$$
For constrained problems such as (4) with $C \ne \emptyset$, the constraint $x_u \le x_v$ is equivalent to $x_u = 1 \Rightarrow x_v = 1$. This condition can, in the minimum cut problem, be expressed as $u \in U \Rightarrow v \in U$. By adding a directed edge $(u, v)$ with weight $\infty$, we can forbid the minimum cut from violating the constraints. In fact, if both $u \in U$ and $v \notin U$ hold, the value of the cut function is $\infty$, and hence such a $U$ cannot be a minimum cut. We summarize this in Algorithm 1.
4.3
Submodular relaxation for problems without the substitute-goods property
For problems without the substitute-goods property, we first decompose the matrix $A$ into $A^+$ and $A^-$ so that $A^+ + A^- = A$, where $A^+ = [a^+_{uv}]$ and $A^- = [a^-_{uv}] \in \mathbb{R}^{n \times n}$ are given by
$$a^+_{uv} = \begin{cases} a_{uv} & (a_{uv} \ge 0) \\ 0 & (a_{uv} < 0) \end{cases}, \qquad a^-_{uv} = \begin{cases} 0 & (a_{uv} \ge 0) \\ a_{uv} & (a_{uv} < 0) \end{cases} \quad (u, v \in N). \quad (11)$$
3 The computational cost of the minimum cut depends on the choice of algorithm. For example, if we use Dinic's method, the time complexity is $O(n^3 \log n) = O((KM)^3 \log(KM))$.
4 $s$, $t$ are auxiliary vertices, distinct from $1, \dots, n$, corresponding to the source and sink in maximum flow problems.
5 For details about the construction of $G$, see, e.g., [4, 12].
This leads to a decomposition of the objective function of problem (4) into supermodular and submodular terms:

x^⊤ A x + b^⊤ x = x^⊤ A⁺ x + x^⊤ A⁻ x + b^⊤ x,    (12)

where x^⊤ A⁺ x is supermodular and x^⊤ A⁻ x + b^⊤ x is submodular. Our approach is to replace the supermodular term x^⊤ A⁺ x by a linear function, so as to construct a submodular function approximating x^⊤ A x + b^⊤ x that can be minimized by Algorithm 1. Similar approaches can be found in the literature, e.g., [8, 22], but ours has a significant point of difference: our method constructs approximate functions bounding the objective from below, which provides information about the degree of accuracy.

Consider an affine function h(x) such that h(x) ≤ x^⊤ A⁺ x for all x ∈ {0, 1}^n. Such an h can be constructed as follows. Since

θ_uv(x_u + x_v − 1) ≤ x_u x_v    (x_u, x_v ∈ {0, 1})    (13)

holds for all θ_uv ∈ [0, 1], an arbitrary matrix Θ ∈ [0, 1]^{n×n} satisfies

x^⊤ A⁺ x ≥ x^⊤ (A⁺ ∘ Θ) 1 + 1^⊤ (A⁺ ∘ Θ) x − 1^⊤ (A⁺ ∘ Θ) 1 =: h_Θ(x),    (14)

where A⁺ ∘ Θ denotes the Hadamard product, i.e., (A⁺ ∘ Θ)_uv = a⁺_uv θ_uv. From inequality (14), the optimal value of the following problem,

Minimize   x^⊤ A⁻ x + b^⊤ x + h_Θ(x)
subject to x = [x_1, ..., x_n]^⊤ ∈ {0, 1}^n,  x_u ≤ x_v ((u, v) ∈ C),    (15)

is a lower bound for that of problem (4). Since A⁻ has non-positive entries and b^⊤ x + h_Θ(x) is affine, we can solve (15) using Algorithm 1 to obtain an approximate solution for (4) and a lower bound for the optimal value of (4).
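A minimal sketch of one relaxation step follows, reusing solve_bqp_mincut from above (assumed defined); the function name and return layout are illustrative.

```python
def submodular_relaxation_step(A, b, Cset, Theta):
    """Solve the relaxed problem (15) for a fixed Theta in [0,1]^{n x n}."""
    Apos = np.maximum(A, 0.0)                   # A+ from Eq. (11)
    Aneg = np.minimum(A, 0.0)                   # A-
    H = Apos * Theta                            # Hadamard product A+ o Theta
    # h_Theta(x) = x^T H 1 + 1^T H x - 1^T H 1 is affine in x
    lin = H.sum(axis=1) + H.sum(axis=0)         # linear coefficients
    const = -H.sum()
    x = solve_bqp_mincut(Aneg, b + lin, Cset)   # minimizes (15) up to const
    value = x @ A @ x + b @ x                   # objective of (4) at x
    psi = x @ Aneg @ x + (b + lin) @ x + const  # optimal value of (15)
    return x, value, psi
```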
4.4 Proximal gradient method with sequential submodular relaxation
An essential problem in submodular relaxation is how to choose Θ ∈ [0, 1]^{n×n} and how to optimize x given Θ. Let ψ(Θ) denote the optimal value of (15), i.e., define ψ(Θ) by ψ(Θ) = min_{x∈R} x^⊤ A⁻ x + b^⊤ x + h_Θ(x), where R is the feasible region of (15). Then, for simultaneous optimization of x and Θ, we consider the following problem:

Maximize  ψ(Θ)  subject to  Θ ∈ [0, 1]^{n×n},    (16)

which can be rewritten as follows:⁶

Minimize  −ψ(Θ) + δ(Θ)  subject to  Θ ∈ R^{n×n},    (17)

where we define δ: R^{n×n} → R ∪ {∞} by

δ(Θ) = 0 (Θ ∈ [0, 1]^{n×n}),  ∞ (Θ ∉ [0, 1]^{n×n}).    (18)

Then, −ψ(Θ) is convex and (17) can be solved using a proximal gradient method.

Let Θ_t ∈ R^{n×n} denote the solution at the t-th step, and let x_t be the optimal solution of (15) with Θ = Θ_t, i.e.,

x_t ∈ argmin_{x∈R} { x^⊤ A⁻ x + b^⊤ x + h_{Θ_t}(x) }.    (19)

The partial derivative of −h_Θ(x) w.r.t. Θ at (Θ_t, x_t), denoted by S_t, is then a subgradient of −ψ(Θ) at Θ_t, which can be computed as follows:

S_t = A⁺ ∘ (11^⊤ − x_t 1^⊤ − 1 x_t^⊤).    (20)

6 Problem (16) can also be solved using the ellipsoid method, which guarantees polynomial time complexity in the input size. However, it is known that the order of its polynomial is large and that the performance of the algorithm can be poor in practice, especially for large problems. To achieve more practical performance, this paper proposes a proximal gradient algorithm.
Algorithm 2 An iterative relaxation algorithm for (4)
Input: Problem instance (A, b, C) of (4).
Output: An approximate solution x̂ to (4) satisfying (25), and a lower bound ρ on the optimal value of (4).
1: Set Θ_1 = 11^⊤/2, t = 1, min_value = ∞, ρ = −∞.
2: while not converged do
3:   Compute x_t satisfying (19) by using Algorithm 1, and compute
       value_t = x_t^⊤ A x_t + b^⊤ x_t,  ψ_t = x_t^⊤ A⁻ x_t + b^⊤ x_t + h_{Θ_t}(x_t),  ρ = max{ρ, ψ_t}.    (24)
4:   if value_t < min_value then
5:     Update min_value and x̂ by min_value = value_t, x̂ = x_t.
6:   end if
7:   Compute Θ_{t+1} by (22) and (23).
8: end while
9: Return x̂, min_value and ρ.
By using S_t and a decreasing sequence {η_t} of positive real numbers, we can express the update scheme of the proximal gradient method as follows:

Θ_{t+1} ∈ argmin_{Θ∈R^{n×n}} { S_t • Θ + (1/(2η_t)) ‖Θ − Θ_t‖² + δ(Θ) }.    (21)

We can compute Θ_{t+1} satisfying (21) by

Θ_{t+1} = Proj_{[0,1]^{n×n}}(Θ_t − η_t S_t),    (22)

where Proj_{[0,1]}(X) is defined entrywise by

(Proj_{[0,1]}(X))_uv = 0 ((X)_uv < 0),  1 ((X)_uv > 1),  (X)_uv (otherwise).    (23)
The proposed algorithm is summarized as Algorithm 2.

The choice of {η_t} has a major impact on the rate of convergence of the algorithm. From a convergence analysis of the proximal gradient method, when we set η_t = Θ(1/√t), it is guaranteed that ψ_t converges to the optimal value ψ* of (16) and that |ψ_t − ψ*| = O(1/√t). Because ψ(Θ) is non-smooth and not strongly concave, there is no better guarantee of the convergence rate, to the best of our knowledge. In practice, however, we observe convergence within roughly 10 iterations.
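Putting (19)-(24) together, here is a compact sketch of the Algorithm 2 loop, assuming the helpers defined above and using η_t = 1/√t as suggested by the analysis; the stopping rule (a fixed iteration budget) is a simplification.

```python
def iterative_relaxation(A, b, Cset, max_iter=50):
    """Algorithm 2 (sketch): sequential submodular relaxation with a
    proximal gradient update on Theta."""
    n = len(b)
    Apos = np.maximum(A, 0.0)
    Theta = np.full((n, n), 0.5)               # initialization of Sec. 4.5
    x_best, min_value, rho = None, float("inf"), -float("inf")
    for t in range(1, max_iter + 1):
        x, value, psi = submodular_relaxation_step(A, b, Cset, Theta)
        rho = max(rho, psi)                    # lower bound update, Eq. (24)
        if value < min_value:
            min_value, x_best = value, x
        S = Apos * (1.0 - np.add.outer(x, x))  # subgradient, Eq. (20)
        Theta = np.clip(Theta - S / np.sqrt(t), 0.0, 1.0)  # Eqs. (22)-(23)
    return x_best, min_value, rho
```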
4.5 Initialization of Θ

Let x̂* denote an optimal solution to (15). We employ Θ_1 = (1/2)11^⊤ for the initialization of Θ because (x_u + x_v − 1)/2 is the tightest lower bound on x_u x_v in the max-norm sense, i.e., h(x_u, x_v) = (x_u + x_v − 1)/2 is the unique minimizer of max_{x_u,x_v∈{0,1}} |x_u x_v − h(x_u, x_v)| subject to the constraints that h(x_u, x_v) is affine and bounded from above by x_u x_v. In this case, x̂* is an approximate solution satisfying the following performance guarantee.

Proposition 3. If Θ = 11^⊤/2, then x̂* satisfies

x̂*^⊤ A x̂* + b^⊤ x̂* ≤ x*^⊤ A x* + b^⊤ x* + (1/2) 1^⊤ A⁺ 1,    (25)

where x* is an optimal solution to (4).
5 Experiments
5.1 Simulations
This section investigates the behavior of Algorithm 2 on the basis of the simulation model used in [9], and we compare the proposed method with state-of-the-art methods: the SDP relaxation method [9] and the QPBO and QPBOI methods [11].

Table 1: Ranges of parameters in the regression models. (i) is supermodular, (ii) is supermodular + submodular, and (iii) is submodular.

             (i)          (ii)        (iii)
β_ij (i≠j)   [0, 2]       [−25, 25]   [−2, 0]
β_ii         [−2M, −M]    [−2M, 0]    [M − 3, M − 1]
α_i          [M, 3M]      [M, 3M]     [1, 3]

Table 2: Results on real retail data. (a) is computational time, (b) is estimated gross profit, (c) is the upper bound.

      actual    proposed   QPBO
(a)   −         36[s]      964[s]
(b)   1403700   1883252    1245568
(c)   −         1897393    1894555

We use SDPA 7.3.8 to solve the SDP problems7 and use the implementations of QPBO and QPBOI written by Kolmogorov.8 The QPBO method computes a partial labeling, i.e., there might remain unlabeled variables, and we set unlabeled variables to 0 in our experiments. For computing a minimum s-t cut, we use Dinic's algorithm [6]. All experiments were conducted on a machine equipped with an Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz and 768GB RAM. We limited all processes to a single CPU core.
Revenue simulation model [9]. The sales quantity q_i of the i-th product was generated from the regression model q_i = α_i + Σ_{j=1}^M β_ij p_j, where {α_i} and {β_ij} were generated by uniform distributions. We considered three types of uniform distributions to investigate the effect of submodularity, as shown in Table 1. They correspond to three different situations: (i) all pairs of products are substitute goods, i.e., the gross profit function is supermodular; (ii) half of the pairs are substitute goods and the others are complementary goods, i.e., the gross profit function contains both submodular and supermodular terms; and (iii) all pairs are complementary goods, i.e., the gross profit function is submodular. The price candidates P_i and cost c_i of each product are fixed to P_i = {0.6, 0.7, ..., 1.0} and c_i = 0, respectively.
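For reference, a small sketch of how instances in the three situations of Table 1 could be generated; the seed and exact sampling details are illustrative, and the diagonal range for case (iii) follows our reading of Table 1.

```python
rng = np.random.default_rng(0)

def simulate_instance(M, case="i"):
    """Draw {alpha_i}, {beta_ij} for q_i = alpha_i + sum_j beta_ij p_j,
    using the parameter ranges of Table 1 (sketch). The gross profit is
    ell(p) = sum_i (p_i - c_i) q_i(p), with c_i = 0 here."""
    if case == "i":      # all substitute goods: supermodular profit
        beta = rng.uniform(0, 2, (M, M))
        np.fill_diagonal(beta, rng.uniform(-2 * M, -M, M))
        alpha = rng.uniform(M, 3 * M, M)
    elif case == "ii":   # mixed substitute/complementary goods
        beta = rng.uniform(-25, 25, (M, M))
        np.fill_diagonal(beta, rng.uniform(-2 * M, 0, M))
        alpha = rng.uniform(M, 3 * M, M)
    else:                # all complementary goods: submodular profit
        beta = rng.uniform(-2, 0, (M, M))
        np.fill_diagonal(beta, rng.uniform(M - 3, M - 1, M))
        alpha = rng.uniform(1, 3, M)
    return alpha, beta
```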
Scalability and accuracy comparison. We evaluated the four methods in terms of computational time (sec) and optimization accuracy (i.e., the objective values they attain). In addition to calculating approximate optimal solutions and values, all four algorithms derive upper bounds on the exact optimal value, which provide information about how accurate the calculated solutions are.9 Fig. 1 shows the results for M = 30, 60, ..., 300 in situations (i), (ii) and (iii). The plotted values are arithmetic means over 5 random problem instances. We observe that the proposed, QPBO and QPBOI methods derived exact solutions in case (i), which can be confirmed from the computed upper bounds coinciding with the values of the objective function. For situations (ii) and (iii), on the other hand, the upper bound and the objective value did not coincide, and the solutions by QPBO were worse than the others. The solutions by QPBOI and SDPrelax are as good as those of the proposed method, but their computational costs are significantly higher, especially in situations (ii) and (iii). In all situations, the proposed method derived solutions as good as the best of the four methods, at the lowest computational cost.
5.2 Real-world retail data
Data and settings. We applied the proposed method to actual retail data from a middle-size supermarket located in Tokyo [23].10 We selected 50 regularly-sold beer products. The data range is approximately three years, from 2012/01 to 2014/12; we used the first 35 months (1065 samples) for training the regression models and simulated the best price strategy for the next 20 days. The problem here was therefore to determine 1000 prices (50 products × 20 days).

For forecasting the sales quantity q_i^(d) of the i-th product on the d-th day, we use the price features {p_j^(d')}_{1≤j≤50, d−19≤d'≤d} of the 50 products over the 20 days up to and including the d-th day. In addition to these 1000 linear price features, we employed "day of the week" and "month" features (both binary), as well as temperature forecast features (continuous), as external features.
7 http://sdpa.sourceforge.net/
8 http://pub.ist.ac.at/~vnk/software.html
9 For example, the coincidence of the upper bound and the calculated optimal value implies that the algorithm computed the exact optimal solution.
10 The data has been provided by KSP-SP Co., LTD, http://www.ksp-sp.com.
[Figure 1 here: three columns of two plots each, for situations (i) supermodular, (ii) supermodular + submodular, and (iii) submodular. Top row: computational time [s] versus M; bottom row: value of the objective function versus M; curves for proposed, QPBO, QPBOI, and SDPrelax.]

Figure 1: Comparison of the proposed, QPBO, QPBOI, and SDPrelax methods on revenue simulation data. The horizontal axis represents the number M of products. The vertical axes represent computational time (top) and the objective values of problem (3) attained by the four methods (bottom). In the bottom row, circle markers with dashed lines represent the computed upper bounds on the optimal values; objective values and upper bounds are normalized so that the upper bounds of the proposed method are equal to 1.
The price candidates {P_ik}_{k=1}^5 were generated by splitting equally the range [P_i1, P_i5], where P_i1 and P_i5 are the lowest and highest prices of the i-th product in the historical data. We assumed that the cost c_i^(d) was 0.3 P_i5 (30% of the list price). Our objective was to obtain a price strategy for the 50 products over the 20 days from the 1066-th to the 1085-th, which involves 1000-dimensional variables, so as to maximize the sum of the gross profit over the 20 days. We estimated the parameters of the regression models using ridge regression. The estimated model contained 310293 pairs with the substitute-goods property and 189207 pairs with the complementary-goods property.
The results are summarized in Table 2, where "actual" means the gross profit computed on the basis of the historical sales quantities and prices over the 20 days from the 1066-th to the 1085-th, with costs c_i^(d) = 0.3 P_i5. The target is thus to find a strategy that is expected to achieve a better gross profit than "actual". We have omitted the results for QPBOI and SDPrelax because they did not terminate after running for over 8 hours. We observe that the proposed method successfully derived a price strategy over 1000 price variables that can be expected to increase gross profit significantly, in spite of its low computational cost; QPBO, in contrast, failed despite its more expensive computation. Although Table 2 reports results obtained with a single CPU core for a fair comparison, the algorithm can easily be parallelized so as to finish the optimization in a few seconds. This makes it possible to change prices dynamically in real time, and enables price managers to explore better price strategies flexibly (changing the price range, target products, domain constraints, etc.).
6 Conclusion

In this paper we dealt with price optimization based on large-scale demand forecasting models. We have shown that the gross profit function is supermodular under the assumption of the substitute-goods property. On the basis of this supermodularity, we have proposed an efficient algorithm that employs network flow algorithms and returns exact solutions for problems with the substitute-goods property. Even when the property does not hold, the proposed algorithm can efficiently find approximate solutions. Our empirical results have shown that the proposed algorithm can handle hundreds or thousands of products at much cheaper computational cost than other existing methods.
References
[1] G. Bitran and R. Caldentey. An overview of pricing models for revenue management. Manufacturing & Service Operations Management, 5(3):203–229, 2003.
[2] E. Boros and P. L. Hammer. Pseudo-Boolean optimization. Discrete Applied Mathematics, 123(1):155–225, 2002.
[3] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(9):1124–1137, 2004.
[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(11):1222–1239, 2001.
[5] F. Caro and J. Gallien. Clearance pricing optimization for a fast-fashion retailer. Operations Research, 60(6):1404–1422, 2012.
[6] T. H. Cormen. Introduction to Algorithms. MIT Press, 2009.
[7] K. J. Ferreira, B. H. A. Lee, and D. Simchi-Levi. Analytics for an online retailer: Demand forecasting and price optimization. Manufacturing & Service Operations Management, pages 69–88, 2015.
[8] L. Gorelick, Y. Boykov, O. Veksler, I. Ayed, and A. Delong. Submodularization for binary pairwise energies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1154–1161, 2014.
[9] S. Ito and R. Fujimaki. Optimization beyond prediction: Prescriptive price optimization. ArXiv e-prints, http://arxiv.org/abs/1605.05422, 2016.
[10] R. Klein. Revenue Management. Springer, 2008.
[11] V. Kolmogorov and C. Rother. Minimizing nonsubmodular functions with graph cuts - a review. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(7):1274–1279, 2007.
[12] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(2):147–159, 2004.
[13] D. Koushik, J. A. Higbie, and C. Eister. Retail price optimization at InterContinental Hotels Group. Interfaces, 42(1):45–57, 2012.
[14] S. Lee. Study of demand models and price optimization performance. PhD thesis, Georgia Institute of Technology, 2011.
[15] A. Marshall. Principles of Economics. Library of Economics and Liberty, 1920.
[16] J. I. McGill and G. J. Van Ryzin. Revenue management: Research overview and prospects. Transportation Science, 33(2):233–256, 1999.
[17] M. Natter, T. Reutterer, and A. Mild. Dynamic pricing support systems for DIY retailers - a case study from Austria. Marketing Intelligence Review, 1:17–23, 2009.
[18] R. L. Phillips. Pricing and Revenue Optimization. Stanford University Press, 2005.
[19] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary MRFs via extended roof duality. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8. IEEE, 2007.
[20] P. Rusmevichientong, B. Van Roy, and P. W. Glynn. A nonparametric approach to multiproduct pricing. Operations Research, 54(1):82–98, 2006.
[21] C. J. Stone. Additive regression and other nonparametric models. The Annals of Statistics, pages 689–705, 1985.
[22] M. Tang, I. B. Ayed, and Y. Boykov. Pseudo-bound optimization for binary energies. In Computer Vision - ECCV 2014, pages 691–707. Springer, 2014.
[23] J. Wang, R. Fujimaki, and Y. Motohashi. Trading interpretability for accuracy: Oblique treed sparse additive models. In KDD, pages 1245–1254, 2015.
5,861 | 6,302 | Low-Rank Regression with Tensor Responses
Guillaume Rabusseau and Hachem Kadri
Aix Marseille Univ, CNRS, LIF, Marseille, France
{firstname.lastname}@lif.univ-mrs.fr
Abstract
This paper proposes an efficient algorithm (HOLRR) to handle regression
tasks where the outputs have a tensor structure. We formulate the regression
problem as the minimization of a least squares criterion under a multilinear
rank constraint, a difficult non-convex problem. HOLRR efficiently computes
an approximate solution of this problem, with solid theoretical guarantees.
A kernel extension is also presented. Experiments on synthetic and real data
show that HOLRR computes accurate solutions while being computationally
very competitive.
1
Introduction
Recently, there has been an increasing interest in adapting machine learning and statistical
methods to tensors. Data with a natural tensor structure are encountered in many scientific
areas including neuroimaging [30], signal processing [4], spatio-temporal analysis [2] and
computer vision [16]. Extending multivariate regression methods to tensors is one of the
challenging tasks in this area. Most existing works extend linear models to the multilinear
setting and focus on the tensor structure of the input data (e.g. [24]). Little has been done
however to investigate learning methods for tensor-structured output data.
We consider a multilinear regression task where outputs are tensors; such a setting can occur
in the context of e.g. spatio-temporal forecasting or image reconstruction. In order to leverage
the tensor structure of the output data, we formulate the problem as the minimization of
a least squares criterion subject to a multilinear rank constraint on the regression tensor.
The rank constraint enforces the model to capture low-rank structure in the outputs and to
explain dependencies between inputs and outputs in a low-dimensional multilinear subspace.
Unlike previous work (e.g. [22, 24, 27]) we do not rely on a convex relaxation of this difficult
non-convex optimization problem. Instead we show that it is equivalent to a multilinear subspace identification problem for which we design a fast and efficient approximation algorithm
(HOLRR), along with a kernelized version which extends our approach to the nonlinear
setting (Section 3). Our theoretical analysis shows that HOLRR provides good approximation
guarantees. Furthermore, we derive a generalization bound for the class of tensor-valued
regression functions with bounded multilinear rank (Section 3.3). Experiments on synthetic
and real data are presented to validate our theoretical findings and show that HOLRR
computes accurate solutions while being computationally very competitive (Section 4).
Proofs of all results stated in the paper can be found in supplementary material A.
Related work. The problem we consider is a generalization of the reduced-rank regression
problem (Section 2.2) to tensor structured responses. Reduced-rank regression has its roots
in statistics [10] but it has also been investigated by the neural network community [3];
non-parametric extensions of this method have been proposed in [18] and [6]. In the context
of multi-task learning, a linear model using a tensor-rank penalization of a least squares
criterion has been proposed in [22] to take into account the multi-modal interactions between
tasks. They propose an approach relying on a convex relaxation of the multilinear rank
constraint using the trace norms of the matricizations, and a non-convex approach based
on alternating minimization. Nonparametric low-rank estimation strategies in reproducing
kernel Hilbert spaces (RKHS) based on a multilinear spectral regularization have been
proposed in [23, 24]. Their method is based on estimating the regression function in the
tensor product of RKHSs and is naturally adapted for tensor covariates. A greedy algorithm
to solve a low-rank tensor learning problem has been proposed in [2] in the context of
multivariate spatio-temporal data analysis. The linear model they assume is different from
the one we propose and is specifically designed for spatio-temporal data. A higher-order
extension of partial least squares (HOPLS) has been proposed in [28] along with a kernel
extension in [29]. While HOPLS has the advantage of taking the tensor structure of the
input into account, the questions of approximation and generalization guarantees were not
addressed in [28]. The generalization bound we provide is inspired by works on matrix
and tensor completion [25, 19].
2 Preliminaries

We begin by introducing some notation. For any integer k we use [k] to denote the set of integers from 1 to k. We use lower-case bold letters for vectors (e.g. v ∈ R^{d_1}), upper-case bold letters for matrices (e.g. M ∈ R^{d_1×d_2}) and bold calligraphic letters for higher-order tensors (e.g. T ∈ R^{d_1×d_2×d_3}). The identity matrix is written I. The ith row (resp. column) of a matrix M is denoted by M_{i,:} (resp. M_{:,i}); this notation is extended to slices of a tensor in the straightforward way. If v ∈ R^{d_1} and v' ∈ R^{d_2}, we use v ⊗ v' ∈ R^{d_1 d_2} to denote the Kronecker product between vectors, and its straightforward extension to matrices and tensors. Given a matrix M ∈ R^{d_1×d_2}, we use vec(M) ∈ R^{d_1 d_2} to denote the column vector obtained by concatenating the columns of M.

2.1 Tensors and Tucker Decomposition

We first recall basic definitions of tensor algebra; more details can be found in [13]. A tensor T ∈ R^{d_1×···×d_p} can simply be seen as a multidimensional array (T_{i_1,...,i_p} : i_n ∈ [d_n], n ∈ [p]). The mode-n fibers of T are the vectors obtained by fixing all indices except the nth one, e.g. T_{:,i_2,...,i_p} ∈ R^{d_1}. The nth-mode matricization of T is the matrix having the mode-n fibers of T for columns; it is denoted by T_{(n)} ∈ R^{d_n × d_1···d_{n−1}d_{n+1}···d_p}. The vectorization of a tensor is defined by vec(T) = vec(T_{(1)}). The inner product between two tensors S and T (of the same size) is defined by ⟨S, T⟩ = ⟨vec(S), vec(T)⟩ and the Frobenius norm by ‖T‖²_F = ⟨T, T⟩. In the following, T always denotes a tensor of size d_1×···×d_p.

The mode-n matrix product of the tensor T and a matrix X ∈ R^{m×d_n} is a tensor denoted by T ×_n X. It is of size d_1×···×d_{n−1}×m×d_{n+1}×···×d_p and is defined by the relation Y = T ×_n X ⇔ Y_{(n)} = X T_{(n)}. The mode-n vector product of the tensor T and a vector v ∈ R^{d_n} is a tensor defined by T •_n v = T ×_n v^⊤ ∈ R^{d_1×···×d_{n−1}×d_{n+1}×···×d_p}. The mode-n rank of T is the dimension of the space spanned by its mode-n fibers, that is rank_n(T) = rank(T_{(n)}). The multilinear rank of T, denoted by rank(T), is the tuple of mode-n ranks of T: rank(T) = (R_1, ..., R_p) where R_n = rank_n(T) for n ∈ [p]. We write rank(T) ⪯ (S_1, ..., S_p) whenever rank_1(T) ≤ S_1, rank_2(T) ≤ S_2, ..., rank_p(T) ≤ S_p.

The Tucker decomposition decomposes a tensor T into a core tensor G transformed by an orthogonal matrix along each mode: (i) T = G ×_1 U_1 ×_2 U_2 ×_3 ··· ×_p U_p, where G ∈ R^{R_1×R_2×···×R_p}, U_i ∈ R^{d_i×R_i} and U_i^⊤ U_i = I for all i ∈ [p]. The number of parameters involved in a Tucker decomposition can be considerably smaller than d_1 d_2···d_p. We have the following identities when matricizing and vectorizing a Tucker decomposition: T_{(n)} = U_n G_{(n)} (U_p ⊗ ··· ⊗ U_{n+1} ⊗ U_{n−1} ⊗ ··· ⊗ U_1)^⊤ and vec(T) = (U_p ⊗ U_{p−1} ⊗ ··· ⊗ U_1) vec(G). It is well known that T admits the Tucker decomposition (i) iff rank(T) ⪯ (R_1, ..., R_p) (see e.g. [13]). Finding an exact Tucker decomposition can be done using the higher-order SVD algorithm (HOSVD) introduced by [5]. Although finding the best approximation of multilinear rank (R_1, ..., R_p) of a tensor T is a difficult problem, the truncated HOSVD algorithm provides good approximation guarantees and often performs well in practice.
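For concreteness, here is a minimal numpy sketch of the mode-n operations and the truncated HOSVD. The matricization uses one fixed, self-consistent column ordering, which may differ from other conventions.

```python
import numpy as np

def unfold(T, n):
    """Mode-n matricization T_(n) (0-indexed mode)."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold for a target tensor shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(T, X, n):
    """T x_n X, defined by Y_(n) = X T_(n)."""
    shape = list(T.shape)
    shape[n] = X.shape[0]
    return fold(X @ unfold(T, n), n, shape)

def truncated_hosvd(T, ranks):
    """Truncated HOSVD: U_n = top-R_n left singular vectors of T_(n);
    core G = T x_1 U_1^T x_2 ... x_p U_p^T."""
    Us = []
    for n, R in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        Us.append(U[:, :R])
    G = T
    for n, U in enumerate(Us):
        G = mode_n_product(G, U.T, n)
    return G, Us
```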
2.2 Low-Rank Regression

Multivariate regression is the task of recovering a function f: R^d → R^p from a set of input-output pairs {(x^(n), y^(n))}_{n=1}^N sampled from the model with additive noise y = f(x) + ε, where ε is the error term. To solve this problem, the ordinary least squares (OLS) approach assumes a linear dependence between input and output data and boils down to finding a matrix W ∈ R^{d×p} that minimizes the squared error ‖XW − Y‖²_F, where X ∈ R^{N×d} and Y ∈ R^{N×p} denote the input and output matrices. To prevent overfitting and to avoid numerical instabilities, a ridge regularization term (i.e. γ‖W‖²_F) is often added to the objective function, leading to the regularized least squares (RLS) method. It is easy to see that the OLS/RLS approach in the multivariate setting is equivalent to performing p independent linear regressions, one for each scalar output {y_j}_{j=1}^p. It thus performs poorly when the outputs are correlated and the true dimension of the response is less than p. Low-rank regression (or reduced-rank regression) addresses this issue by solving the rank-penalized problem min_{W∈R^{d×p}} ‖XW − Y‖²_F + γ‖W‖²_F s.t. rank(W) ≤ R for a given integer R. The rank constraint was first proposed in [1], whereas the term reduced-rank regression was introduced in [10]. Adding a ridge regularization was proposed in [18]. In the rest of the paper we refer to this approach as low-rank regression (LRR). For more description and discussion of reduced-rank regression, we refer the reader to the books [21] and [11].
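A compact sketch of LRR using the classical SVD construction; for γ = 0 this matches the standard reduced-rank regression solution (cf. [10, 21]), while the ridge case of [18] reduces to the same computation on augmented data, a detail this sketch glosses over.

```python
def lrr(X, Y, R, gamma=0.0):
    """Reduced-rank (ridge) regression sketch: compute the RLS solution,
    then enforce rank(W) <= R by projecting the fitted values onto their
    top-R right singular subspace."""
    d = X.shape[1]
    W_rls = np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ Y)
    # SVD of the fitted outputs; V spans the best rank-R output subspace
    _, _, Vt = np.linalg.svd(X @ W_rls, full_matrices=False)
    V = Vt[:R].T
    return W_rls @ V @ V.T          # rank(W) <= R
```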
3 Low-Rank Regression for Tensor-Valued Functions

3.1 Problem Formulation

We consider a multivariate regression task where the input is a vector and the response has a tensor structure. Let f: R^{d_0} → R^{d_1×d_2×···×d_p} be the function we want to learn from a sample of input-output data {(x^(n), Y^(n))}_{n=1}^N drawn from the model Y = f(x) + E, where E is an error term. We assume that f is linear, that is f(x) = W •_1 x for some regression tensor W ∈ R^{d_0×d_1×···×d_p}. The vectorization of this relation leads to vec(f(x)) = W_{(1)}^⊤ x, showing that this model is equivalent to the standard multivariate linear model. One way to tackle this regression task would be to vectorize each output sample and to perform a standard low-rank regression on the data {(x^(n), vec(Y^(n)))}_{n=1}^N ⊂ R^{d_0} × R^{d_1···d_p}. A major drawback of this approach is that the tensor structure of the output is lost in the vectorization step. The low-rank model tries to capture linear dependencies between components of the output, but it ignores higher-level dependencies that could be present in a tensor-structured output. For illustration, suppose the output is a matrix encoding the samples of d_1 continuous variables at d_2 different time steps; one could expect structural relations between the d_1 time series, e.g. linear dependencies between the rows of the output matrix.

Low-rank regression for tensor responses. To overcome the limitation described above, we propose an extension of the low-rank regression method to tensor-structured responses by enforcing low multilinear rank of the regression tensor W. Let {(x^(n), Y^(n))}_{n=1}^N ⊂ R^{d_0} × R^{d_1×d_2×···×d_p} be a training sample of input/output data drawn from the model f(x) = W •_1 x + E, where W is assumed to be of low multilinear rank. Considering the framework of empirical risk minimization, we want to find a low-rank regression tensor W minimizing the loss on the training data. To avoid numerical instabilities and to prevent overfitting, we add a ridge regularization to the objective function, leading to the minimization of Σ_{n=1}^N ℓ(W •_1 x^(n), Y^(n)) + γ‖W‖²_F w.r.t. the regression tensor W, subject to the constraint rank(W) ⪯ (R_0, R_1, ..., R_p) for some given integers R_0, R_1, ..., R_p, where ℓ is a loss function. In this paper, we consider the squared error loss between tensors, defined by L(T, T̂) = ‖T − T̂‖²_F. Using this loss we can rewrite the minimization problem as

min_{W ∈ R^{d_0×d_1×···×d_p}}  ‖W ×_1 X − Y‖²_F + γ‖W‖²_F   s.t.  rank(W) ⪯ (R_0, R_1, ..., R_p),    (1)
Figure 1: Image reconstruction from noisy measurements: Y = W •_1 x + E, where W is a color image (RGB). Each image is labeled with the algorithm and the rank parameter.

where the input matrix X ∈ R^{N×d_0} and the output tensor Y ∈ R^{N×d_1×···×d_p} are defined by X_{n,:} = (x^(n))^⊤ and Y_{n,:,···,:} = Y^(n) for n = 1, ..., N (Y is the tensor obtained by stacking the output tensors along the first mode).

Low-rank regression function. Let W* be a solution of problem (1). It follows from the multilinear rank constraint that W* = G ×_1 U_0 ×_2 ··· ×_{p+1} U_p for some core tensor G ∈ R^{R_0×···×R_p} and orthogonal matrices U_i ∈ R^{d_i×R_i} for 0 ≤ i ≤ p. The regression function f*: x ↦ W* •_1 x can thus be written as f*: x ↦ G ×_1 (x^⊤ U_0) ×_2 U_1 ×_3 ··· ×_{p+1} U_p.

This implies several interesting properties. First, for any x ∈ R^{d_0} we have f*(x) = T_x ×_1 U_1 ×_2 ··· ×_p U_p with T_x = G •_1 (U_0^⊤ x), which implies rank(f*(x)) ⪯ (R_1, ..., R_p); that is, the image of f* is a set of tensors with low multilinear rank. Second, the relation between x and Y = f*(x) is explained in a low-dimensional subspace of size R_0 × R_1 × ··· × R_p. Indeed, one can decompose the mapping f* into the following steps: (i) project x onto R^{R_0} as x̄ = U_0^⊤ x, (ii) perform a low-dimensional mapping Ȳ = G •_1 x̄, (iii) project back into the output space to get Y = Ȳ ×_1 U_1 ×_2 ··· ×_p U_p.

To give an illustrative intuition on the differences between matrix and multilinear rank regularization, we present a simple experiment¹ in Figure 1. We generate data from the model Y = W •_1 x + E, where the tensor W ∈ R^{3×m×n} is a color image of size m × n encoded with three color channels (RGB). The components of both x and E are drawn from N(0, 1). This experiment allows us to visualize the tensors returned by RLS, LRR and our method HOLRR, which enforces low multilinear rank of the regression function. First, it shows that the function learned by vectorizing the outputs and performing LRR does not enforce any low-rank structure: in Figure 1, the regression tensors returned by HOLRR-(3,1,1) are clearly of low rank while the ones returned by LRR-1 are not. It also shows that taking into account the low-rank structure of the model allows one to better eliminate the noise when the true regression tensor is of low rank (Figure 1, left). However, if the ground truth model does not have a low-rank structure, enforcing low multilinear rank leads to underfitting for low values of the rank parameter (Figure 1, right).
3.2 Higher-Order Low-Rank Regression and its Kernel Extension

We now propose an efficient algorithm to tackle problem (1). We first show that the ridge regularization term in (1) can be incorporated into the data-fitting term. Let X̃ ∈ R^{(N+d_0)×d_0} and Ỹ ∈ R^{(N+d_0)×d_1×···×d_p} be defined by X̃^⊤ = (X^⊤ | √γ I) and Ỹ_{(1)}^⊤ = (Y_{(1)}^⊤ | 0)^⊤. It is easy to check that the objective function in (1) is equal to ‖W ×_1 X̃ − Ỹ‖²_F. Minimization problem (1) is then equivalent to

min_{G ∈ R^{R_0×R_1×···×R_p}, U_i ∈ R^{d_i×R_i} for 0≤i≤p}  ‖W ×_1 X̃ − Ỹ‖²_F   s.t.  W = G ×_1 U_0 ×_2 ··· ×_{p+1} U_p,  U_i^⊤ U_i = I for all i.    (2)

We now show that this minimization problem can be reduced to finding p + 1 projection matrices onto subspaces of dimension R_0, R_1, ..., R_p. We start by showing that the core tensor G solution of (2) is determined by the factor matrices U_0, ..., U_p.

1 An extended version of this experiment is presented in supplementary material B.
Theorem 1. For given orthogonal matrices U_0, ..., U_p, the tensor G that minimizes (2) is given by

G = Ỹ ×_1 (U_0^⊤ X̃^⊤ X̃ U_0)^{-1} U_0^⊤ X̃^⊤ ×_2 U_1^⊤ ×_3 ··· ×_{p+1} U_p^⊤.

It follows from Theorem 1 that problem (1) can be written as

min_{U_i ∈ R^{d_i×R_i}, 0≤i≤p}  ‖Ỹ ×_1 Π_0 ×_2 Π_1 ×_3 ··· ×_{p+1} Π_p − Ỹ‖²_F    (3)
subject to  U_i^⊤ U_i = I for all i,  Π_0 = X̃ U_0 (U_0^⊤ X̃^⊤ X̃ U_0)^{-1} U_0^⊤ X̃^⊤,  Π_i = U_i U_i^⊤ for i ≥ 1.

Note that Π_0 is the orthogonal projection onto the space spanned by the columns of X̃U_0, and Π_i is the orthogonal projection onto the column space of U_i for i ≥ 1. Hence solving problem (1) is equivalent to finding p + 1 low-dimensional subspaces U_0, ..., U_p such that projecting Ỹ onto the spaces X̃U_0, U_1, ..., U_p along the corresponding modes is close to Ỹ.

HOLRR algorithm. Since solving problem (3) for the p + 1 projections simultaneously is a difficult non-convex optimization problem, we propose to solve it independently for each projection. This approach has the benefits of being computationally efficient and of providing good theoretical approximation guarantees (see Theorem 2). The following proposition gives the analytic solutions of (3) when each projection is considered independently.

Proposition 1. For 0 ≤ i ≤ p, using the definition of Π_i in (3), the optimal solution of min_{U_i ∈ R^{d_i×R_i}} ‖Ỹ ×_{i+1} Π_i − Ỹ‖²_F s.t. U_i^⊤ U_i = I is given by the top R_i eigenvectors of (X̃^⊤ X̃)^{-1} X̃^⊤ Ỹ_{(1)} Ỹ_{(1)}^⊤ X̃ if i = 0, and of Ỹ_{(i+1)} Ỹ_{(i+1)}^⊤ otherwise.

The results from Theorem 1 and Proposition 1 can be rewritten in terms of the original input matrix X and output tensor Y using the identities X̃^⊤ X̃ = X^⊤ X + γI, Ỹ ×_1 X̃^⊤ = Y ×_1 X^⊤ and Ỹ_{(i)} Ỹ_{(i)}^⊤ = Y_{(i)} Y_{(i)}^⊤ for any i ≥ 1. The overall Higher-Order Low-Rank Regression procedure (HOLRR) is summarized in Algorithm 1. Note that the Tucker decomposition of the solution returned by HOLRR could be a good initialization point for an alternating least squares method; however, studying the theoretical and experimental properties of this approach is beyond the scope of this paper and is left for future work.
HOLRR kernel extension. We now design a kernelized version of the HOLRR algorithm by analyzing how it would be instantiated in a feature space, and show that all the steps involved can be performed using the Gram matrix of the input data, without having to explicitly compute the feature map. Let φ: R^{d_0} → R^L be a feature map and let Φ ∈ R^{N×L} be the matrix with rows φ(x^(n))^⊤ for n ∈ [N]. The higher-order low-rank regression problem in the feature space boils down to the minimization problem

min_{W ∈ R^{L×d_1×···×d_p}}  ‖W ×_1 Φ − Y‖²_F + γ‖W‖²_F   s.t.  rank(W) ⪯ (R_0, R_1, ..., R_p).    (4)

Following the HOLRR algorithm, one needs to compute the top R_0 eigenvectors of the L × L matrix (Φ^⊤Φ + γI)^{-1} Φ^⊤ Y_{(1)} Y_{(1)}^⊤ Φ. The following proposition shows that this can be done using the Gram matrix K = ΦΦ^⊤ without explicitly knowing the feature map φ.

Proposition 2. If α ∈ R^N is an eigenvector with eigenvalue λ of the matrix (K + γI)^{-1} Y_{(1)} Y_{(1)}^⊤ K, then v = Φ^⊤ α ∈ R^L is an eigenvector with eigenvalue λ of the matrix (Φ^⊤Φ + γI)^{-1} Φ^⊤ Y_{(1)} Y_{(1)}^⊤ Φ.

Let A be the matrix whose columns are the top R_0 eigenvectors of (K + γI)^{-1} Y_{(1)} Y_{(1)}^⊤ K. When working with the feature map φ, it follows from the previous proposition that line 1 in Algorithm 1 is equivalent to choosing U_0 = Φ^⊤ A ∈ R^{L×R_0}, while the updates in line 3 stay the same. The regression tensor W ∈ R^{L×d_1×···×d_p} returned by this algorithm is then equal to W = Y ×_1 P ×_2 U_1 U_1^⊤ ×_3 ··· ×_{p+1} U_p U_p^⊤, where P = Φ^⊤ A (A^⊤ Φ (Φ^⊤Φ + γI) Φ^⊤ A)^{-1} A^⊤ Φ Φ^⊤. It is easy to check that P can be rewritten as P = Φ^⊤ A (A^⊤ K (K + γI) A)^{-1} A^⊤ K.

Suppose now that the feature map φ is induced by a kernel k: R^{d_0} × R^{d_0} → R. The prediction for an input vector x is then given by W •_1 x = C •_1 k_x, where the nth component of k_x ∈ R^N is ⟨φ(x^(n)), φ(x)⟩ = k(x^(n), x), and the tensor C ∈ R^{N×d_1×···×d_p} is defined by C = G ×_1 A ×_2 U_1 ×_3 ··· ×_{p+1} U_p, with G = Y ×_1 (A^⊤ K (K + γI) A)^{-1} A^⊤ K ×_2 U_1^⊤ ×_3 ··· ×_{p+1} U_p^⊤.

Algorithm 1 HOLRR
Input: X ∈ R^{N×d_0}, Y ∈ R^{N×d_1×···×d_p}, rank (R_0, R_1, ..., R_p) and regularization parameter γ.
1: U_0 ← top R_0 eigenvectors of (X^⊤X + γI)^{-1} X^⊤ Y_{(1)} Y_{(1)}^⊤ X
2: for i = 1 to p do
3:   U_i ← top R_i eigenvectors of Y_{(i+1)} Y_{(i+1)}^⊤
4: end for
5: M ← (U_0^⊤ (X^⊤X + γI) U_0)^{-1} U_0^⊤ X^⊤
6: G ← Y ×_1 M ×_2 U_1^⊤ ×_3 ··· ×_{p+1} U_p^⊤
7: return G ×_1 U_0 ×_2 U_1 ×_3 ··· ×_{p+1} U_p

Algorithm 2 Kernelized HOLRR
Input: Gram matrix K ∈ R^{N×N}, Y ∈ R^{N×d_1×···×d_p}, rank (R_0, R_1, ..., R_p) and regularization parameter γ.
1: A ← top R_0 eigenvectors of (K + γI)^{-1} Y_{(1)} Y_{(1)}^⊤ K
2: for i = 1 to p do
3:   U_i ← top R_i eigenvectors of Y_{(i+1)} Y_{(i+1)}^⊤
4: end for
5: M ← (A^⊤ K (K + γI) A)^{-1} A^⊤ K
6: G ← Y ×_1 M ×_2 U_1^⊤ ×_3 ··· ×_{p+1} U_p^⊤
7: return C = G ×_1 A ×_2 U_1 ×_3 ··· ×_{p+1} U_p

Note that C has multilinear rank (R_0, ..., R_p); hence the low multilinear rank constraint on W in the feature space translates into the low-rank structure of the coefficient tensor C. Let H be the reproducing kernel Hilbert space associated with the kernel k. The overall procedure for kernelized HOLRR is summarized in Algorithm 2. This algorithm returns the tensor C ∈ R^{N×d_1×···×d_p} defining the regression function f: x ↦ C •_1 k_x = Σ_{n=1}^N k(x, x^(n)) C^(n), where C^(n) = C_{n,:,···,:} ∈ R^{d_1×···×d_p}.
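A direct numpy transcription of Algorithm 1 follows, reusing unfold and mode_n_product from the earlier sketch; the eigenvector extraction (generic eig plus sorting by real part) is an implementation choice of ours.

```python
def holrr(X, Y, ranks, gamma=0.0):
    """Algorithm 1 (sketch): X is N x d0, Y is N x d1 x ... x dp,
    ranks = (R0, R1, ..., Rp). Returns W of shape d0 x d1 x ... x dp."""
    N, d0 = X.shape
    p = Y.ndim - 1
    R0, Rs = ranks[0], ranks[1:]

    def top_eigvecs(M, R):
        # top-R eigenvectors; M may be non-symmetric when i = 0
        vals, vecs = np.linalg.eig(M)
        idx = np.argsort(-vals.real)[:R]
        return vecs[:, idx].real

    Y1 = unfold(Y, 0)                                  # Y_(1), N x (d1...dp)
    XtX = X.T @ X + gamma * np.eye(d0)
    U0 = top_eigvecs(np.linalg.solve(XtX, X.T @ Y1 @ Y1.T @ X), R0)
    Us = [top_eigvecs(unfold(Y, i + 1) @ unfold(Y, i + 1).T, Rs[i])
          for i in range(p)]
    M = np.linalg.solve(U0.T @ XtX @ U0, U0.T @ X.T)   # line 5
    G = mode_n_product(Y, M, 0)                        # line 6
    for i, U in enumerate(Us):
        G = mode_n_product(G, U.T, i + 1)
    W = mode_n_product(G, U0, 0)                       # line 7
    for i, U in enumerate(Us):
        W = mode_n_product(W, U, i + 1)
    return W
```

The kernelized variant (Algorithm 2) is structurally identical: K replaces the X^⊤X-based quantities, line 1 works on (K + γI)^{-1} Y_{(1)} Y_{(1)}^⊤ K, and the returned object is the coefficient tensor C rather than W.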
3.3 Theoretical Analysis

Complexity analysis. HOLRR is a polynomial-time algorithm; more precisely, it has time complexity O((d_0)³ + N((d_0)² + d_0 d_1···d_p) + max_{i≥0} R_i (d_i)² + N d_1···d_p max_{i≥1} d_i). In comparison, LRR has time complexity O((d_0)³ + N((d_0)² + d_0 d_1···d_p) + (N + R)(d_1···d_p)²). Since the complexity of HOLRR depends only linearly on the product of the output dimensions, instead of quadratically for LRR, we can conclude that HOLRR will be more efficient than LRR when the output dimensions d_1, ..., d_p are large. It is worth mentioning that the method proposed in [22] to solve a convex relaxation of problem (1) is an iterative algorithm that needs to compute SVDs of matrices of size d_i × d_1···d_{i−1}d_{i+1}···d_p for each 0 ≤ i ≤ p at each iteration; it is thus computationally more expensive than HOLRR. Moreover, since HOLRR relies only on simple linear algebra tools, readily available methods could be used to further improve the speed of the algorithm, e.g. randomized SVD [8] and random feature approximations of the kernel function [12, 20].

Approximation guarantees. It is easy to check that problem (1) is NP-hard, since it generalizes the problem of fitting a Tucker decomposition [9]. The following theorem shows that HOLRR is a (p+1)-approximation algorithm for this problem. This result generalizes the approximation guarantees provided by the truncated HOSVD algorithm for the problem of finding the best low multilinear rank approximation of an arbitrary tensor.

Theorem 2. Let W* be a solution of problem (1) and let W be the regression tensor returned by Algorithm 1. If L: R^{d_0×···×d_p} → R denotes the objective function of (1) w.r.t. W, then L(W) ≤ (p + 1) L(W*).

Generalization bound. The following theorem gives an upper bound on the excess risk for the class F = {x ↦ W •_1 x : rank(W) ⪯ (R_0, ..., R_p)} of tensor-valued regression functions with bounded multilinear rank. Recall that the expected loss of a hypothesis h ∈ F w.r.t. the target function f* is defined by R(h) = E_x[L(h(x), f*(x))] and its empirical loss by R̂(h) = (1/N) Σ_{n=1}^N L(h(x^(n)), f*(x^(n))).

Theorem 3. Let L: R^{d_1×···×d_p} × R^{d_1×···×d_p} → R be a loss function satisfying L(A, B) = (1/(d_1···d_p)) Σ_{i_1,...,i_p} ℓ(A_{i_1,...,i_p}, B_{i_1,...,i_p}) for some loss function ℓ: R × R → R_+ bounded by M. Then for any δ > 0, with probability at least 1 − δ over the choice of a sample of size N, the following inequality holds for all h ∈ F:

R(h) ≤ R̂(h) + M √( 2D log( 4e(p+2) d_0 d_1···d_p / max_{i≥0} d_i ) · log(N)/N ) + M √( log(1/δ) / (2N) ),

where D = R_0 R_1 ··· R_p + Σ_{i=0}^p R_i d_i.

Proof. (Sketch) The complete proof is given in the supplementary material. It relies on bounding the pseudo-dimension of the class of real-valued functions F̃ = {(x, i_1, ..., i_p) ↦ (W •_1 x)_{i_1,...,i_p} : rank(W) = (R_0, ..., R_p)}. We show that the pseudo-dimension of F̃ is upper bounded by (R_0 R_1 ··· R_p + Σ_{i=0}^p R_i d_i) log( 4e(p+2) d_0 d_1···d_p / max_{i≥0} d_i ). This is done by leveraging the following result, originally due to [26]: the number of sign patterns of r polynomials, each of degree at most d, over q variables is at most (4edr/q)^q for all r > q > 2 [25, Theorem 2]. The rest of the proof consists in showing that the risks (resp. empirical risks) of hypotheses in F and F̃ are closely related, and in invoking standard generalization bounds in terms of the pseudo-dimension [17, Theorem 10.6].

Note that generalization bounds based on the pseudo-dimension for multivariate regression without a low-rank constraint would involve a term in O(√(d_0 d_1···d_p)). In contrast, the bound from the previous theorem depends on the product of the output dimensions only through a term bounded by O(√(log(d_1···d_p))). In some sense, taking into account the low multilinear rank of the hypothesis class allows us to significantly reduce the dependence on the output dimensions, from O(√(d_0···d_p)) to O(√((R_0···R_p + Σ_i R_i d_i)(Σ_i log d_i))).
4 Experiments

In this section, we evaluate HOLRR on both synthetic and real-world datasets. Our experimental results are for tensor-structured output regression problems, on which we report root mean-squared errors (RMSE) averaged across all the outputs. We compare HOLRR with the following methods: regularized least squares RLS; low-rank regression LRR, described in Section 2.2; a multilinear approach based on tensor trace norm regularization, ADMM [7, 22]; a non-convex multilinear multitask learning approach, MLMT-NC [22]; a higher-order extension of partial least squares, HOPLS [28]; and the greedy tensor approach for multivariate spatio-temporal analysis, Greedy [2].

For experiments with kernelized algorithms we use the readily available kernelized RLS and the LRR kernel extension proposed in [18]. Note that ADMM, MLMT-NC and Greedy only consider a linear dependency between inputs and outputs. The greedy tensor algorithm proposed in [2] was developed specifically for spatio-temporal data, and the implementation provided by the authors is restricted to third-order tensors. Although MLMT-NC is perhaps the closest algorithm to ours, we applied it only to simulated data, because MLMT-NC is computationally very expensive and becomes intractable for large data sets. Average running times are reported in supplementary material B.

4.1 Synthetic Data

We generate both linear and nonlinear data. Linear data is drawn from the model Y = W •_1 x + E, where W ∈ R^{10×10×10×10} is a tensor of multilinear rank (6, 4, 4, 8) drawn at random, x ∈ R^{10} is drawn from N(0, I), and each component of the error tensor E is drawn from N(0, 0.1). Nonlinear data is drawn from Y = W •_1 (x ⊗ x) + E, where W ∈ R^{25×10×10×10} is of rank (5, 6, 4, 2), and x ∈ R^5 and E are generated as above. Hyper-parameters for all algorithms are selected using 3-fold cross-validation on the training data; a data-generation sketch is given below.
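A sketch of the linear synthetic-data generation, reusing mode_n_product from the earlier sketch. The noise standard deviation for N(0, 0.1) is taken here as √0.1, i.e. we read 0.1 as a variance; the paper does not say which convention it uses.

```python
def random_low_rank_tensor(dims, ranks, rng):
    """Draw a tensor with multilinear rank <= ranks via a random Tucker
    decomposition with orthonormal factors (sketch)."""
    W = rng.standard_normal(ranks)                     # random core G
    for n, (d, R) in enumerate(zip(dims, ranks)):
        U, _ = np.linalg.qr(rng.standard_normal((d, R)))
        W = mode_n_product(W, U, n)
    return W

rng = np.random.default_rng(0)
W = random_low_rank_tensor((10, 10, 10, 10), (6, 4, 4, 8), rng)
X = rng.standard_normal((100, 10))                     # inputs x ~ N(0, I)
E = np.sqrt(0.1) * rng.standard_normal((100, 10, 10, 10))
Y = mode_n_product(W, X, 0) + E                        # row n equals W .1 x^(n)
```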
These experiments were carried out for different sizes of the training data set, with 20 trials executed per size. The average RMSEs on a test set of size 100 over the 20 trials are reported in Figure 2. The HOLRR algorithm clearly outperforms the other methods on the linear data. The MLMT-NC method achieved the second best performance, but it is much more computationally expensive (see Table 1 in supplementary material B). On the nonlinear data LRR achieves good performance, but HOLRR is still significantly more accurate, especially with small training datasets.
Figure 2: Average RMSE as a function of the training set size: (left) linear data, (middle)
nonlinear data, (right) for different values of the rank parameter.
Table 1: RMSE on the forecasting task.

Data set     ADMM     Greedy   HOPLS    HOLRR    K-HOLRR (poly)   K-HOLRR (rbf)
CCDS         0.8448   0.8325   0.8147   0.8096   0.8275           0.7913
Foursquare   0.1407   0.1223   0.1224   0.1227   0.1223           0.1226
Meteo-UK     0.6140   −        0.625    0.5971   0.6107           0.5886

To see how sensitive HOLRR is w.r.t. the choice of the multilinear rank, we carried out a similar experiment comparing HOLRR's performance for different values of the rank parameter; see Fig. 2 (right). In this experiment, the rank of the tensor W used to generate the data is (2, 2, 2, 2), while the input and output dimensions and the noise level are the same as above.
4.2 Real Data

We evaluate our algorithm on a forecasting task with the following real-world data sets:

CCDS: the Comprehensive Climate Data Set is a collection of climate records of North America from [15]. It contains monthly observations of 17 variables, such as carbon dioxide and temperature, spanning 1990 to 2001 across 125 observation locations.

Foursquare: the Foursquare data set [14] contains users' check-in records in the Pittsburgh area, categorized by venue type such as Art & University. It records the number of check-ins by 121 users in each of 15 venue categories over 1200 time intervals.

Meteo-UK: this data set is collected from the meteorological office of the UK2. It contains monthly measurements of 5 variables at 16 stations across the UK from 1960 to 2000.

The forecasting task consists in predicting all variables at times t+1, ..., t+k from their values at times t−2, t−1 and t. The first two data sets were used in [2] with k = 1 (i.e. the outputs are matrices); we consider the same setting for these two data sets. For the third data set we consider higher-order output tensors by setting k = 5. The output tensors are thus of size 17 × 125, 15 × 121 and 16 × 5 × 5, respectively, for the three datasets.

For all the experiments, we use 90% of the available data for training and 10% for testing. All hyper-parameters are chosen by cross-validation. The average test RMSEs over 10 runs are reported in Table 1 (running times are reported in Table 1 of supplementary material B). HOLRR and K-HOLRR outperform the other methods on the CCDS dataset while being orders of magnitude faster in the kernelized version (0.61s vs. 75.47s for Greedy and 235.73s for ADMM on average). On the Foursquare dataset HOLRR performs as well as Greedy, and on the Meteo-UK dataset K-HOLRR obtains the best results with the RBF kernel while being much faster than ADMM (1.66s vs. 40.23s on average).
5 Conclusion

We proposed a low-rank multilinear regression model for tensor-structured output data. We developed a fast and efficient algorithm to tackle the multilinear rank penalized minimization problem and provided theoretical guarantees. Experimental results showed that capturing the low-rank structure of the output data can help to improve tensor regression performance.

2 http://www.metoffice.gov.uk/public/weather/climate-historic/

Acknowledgments

We thank François Denis and the reviewers for their helpful comments and suggestions. This work was partially supported by the ANR JCJC program MAD (ANR-14-CE27-0002).
References
[1] T. W. Anderson. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Annals of Mathematical Statistics, 22:327–351, 1951.
[2] M. T. Bahadori, Q. R. Yu, and Y. Liu. Fast multivariate spatio-temporal analysis via low rank tensor learning. In NIPS, 2014.
[3] P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, 1989.
[4] A. Cichocki, R. Zdunek, A. H. Phan, and S. I. Amari. Nonnegative Matrix and Tensor Factorizations. Wiley, 2009.
[5] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.
[6] R. Foygel, M. Horrell, M. Drton, and J. D. Lafferty. Nonparametric reduced rank regression. In NIPS, 2012.
[7] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[8] N. Halko, P. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[9] C. J. Hillar and L. Lim. Most tensor problems are NP-hard. JACM, 60(6):45, 2013.
[10] A. J. Izenman. Reduced-rank regression for the multivariate linear model. Journal of Multivariate Analysis, 5(2):248–264, 1975.
[11] A. J. Izenman. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. Springer-Verlag, New York, 2008.
[12] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
[13] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[14] X. Long, L. Jin, and J. Joshi. Exploring trajectory-driven local geographic topics in Foursquare. In UbiComp, 2012.
[15] A. C. Lozano, H. Li, A. Niculescu-Mizil, Y. Liu, C. Perlich, J. Hosking, and N. Abe. Spatial-temporal causal modeling for climate change attribution. In KDD, 2009.
[16] H. Lu, K. N. Plataniotis, and A. Venetsanopoulos. Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data. CRC Press, 2013.
[17] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[18] A. Mukherjee and J. Zhu. Reduced rank ridge regression and its kernel extensions. Statistical Analysis and Data Mining, 4(6):612–622, 2011.
[19] M. Nickel and V. Tresp. An analysis of tensor models for learning on structured data. In Machine Learning and Knowledge Discovery in Databases, pages 272–287. Springer, 2013.
[20] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[21] G. C. Reinsel and R. P. Velu. Multivariate Reduced-Rank Regression: Theory and Applications. Lecture Notes in Statistics. Springer, 1998.
[22] B. Romera-Paredes, M. H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In ICML, 2013.
[23] M. Signoretto, L. De Lathauwer, and J. K. Suykens. Learning tensors in reproducing kernel Hilbert spaces with multilinear spectral penalties. arXiv preprint arXiv:1310.4977, 2013.
[24] M. Signoretto, Q. T. Dinh, L. De Lathauwer, and J. K. Suykens. Learning with tensors: a framework based on convex optimization and spectral regularization. Machine Learning, pages 1–49, 2013.
[25] N. Srebro, N. Alon, and T. S. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In NIPS, 2004.
[26] H. E. Warren. Lower bounds for approximation by nonlinear manifolds. Transactions of the American Mathematical Society, 133(1):167–178, 1968.
[27] K. Wimalawarne, M. Sugiyama, and R. Tomioka. Multitask learning meets tensor factorization: task imputation via convex optimization. In NIPS, 2014.
[28] Q. Zhao, C. F. Caiafa, D. P. Mandic, Z. C. Chao, Y. Nagasaka, N. Fujii, L. Zhang, and A. Cichocki. Higher-order partial least squares (HOPLS). IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1660–1673, 2012.
[29] Q. Zhao, G. Zhou, T. Adalı, L. Zhang, and A. Cichocki. Kernel-based tensor partial least squares for reconstruction of limb movements. In ICASSP, 2013.
[30] H. Zhou, L. Li, and H. Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552, 2013.
Architectural Complexity Measures of
Recurrent Neural Networks
Saizheng Zhang1,*, Yuhuai Wu2,*, Tong Che4, Zhouhan Lin1,
Roland Memisevic1,5 , Ruslan Salakhutdinov3,5 and Yoshua Bengio1,5
1
MILA, Université de Montréal, 2 University of Toronto, 3 Carnegie Mellon University,
4
Institut des Hautes Études Scientifiques, France, 5 CIFAR
Abstract
In this paper, we systematically analyze the connecting architectures of recurrent
neural networks (RNNs). Our main contribution is twofold: first, we present a
rigorous graph-theoretic framework describing the connecting architectures of
RNNs in general. Second, we propose three architecture complexity measures of
RNNs: (a) the recurrent depth, which captures the RNN?s over-time nonlinear
complexity, (b) the feedforward depth, which captures the local input-output nonlinearity (similar to the ?depth? in feedforward neural networks (FNNs)), and (c)
the recurrent skip coefficient which captures how rapidly the information propagates over time. We rigorously prove each measure?s existence and computability.
Our experimental results show that RNNs might benefit from larger recurrent depth
and feedforward depth. We further demonstrate that increasing recurrent skip
coefficient offers performance boosts on long term dependency problems.
1
Introduction
Recurrent neural networks (RNNs) have been shown to achieve promising results on many difficult
sequential learning problems [1, 2, 3, 4, 5]. There is also much work attempting to reveal the
principles behind the challenges and successes of RNNs, including optimization issues [6, 7], gradient
vanishing/exploding related problems [8, 9], analysing/designing new RNN transition functional
units like LSTMs, GRUs and their variants [10, 11, 12, 13].
This paper focuses on another important theoretical aspect of RNNs: the connecting architecture.
Ever since [14, 15] introduced different forms of "stacked RNNs", researchers have taken architecture
design for granted and have paid less attention to the exploration of other connecting architectures.
Some examples include [16, 1, 17] who explored the use of skip connections; [18] who pointed out
the distinction of constructing a "deep" RNN from the view of the recurrent paths and the view of the
input-to-hidden and hidden-to-output maps. However, they did not rigorously formalize the notion
of "depth" and its implications in "deep" RNNs. Besides "deep" RNNs, there still remains a vastly
unexplored field of connecting architectures. We argue that one barrier for better understanding the
architectural complexity is the lack of a general definition of the connecting architecture. This forced
previous researchers to mostly consider the simple cases while neglecting other possible connecting
variations. Another barrier is the lack of quantitative measurements of the complexity of different
RNN connecting architectures: even the concept of "depth" is not clear with current RNNs.
In this paper, we try to address these two barriers. We first introduce a general formulation of
RNN connecting architectures, using a well-defined graph representation. Observing that the RNN
undergoes multiple transformations not only feedforwardly (from input to output within a time step)
but also recurrently (across multiple time steps), we carry out a quantitative analysis of the number of
transformations in these two orthogonal directions, which results in the definitions of recurrent depth
* Equal contribution.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
and feedforward depth. These two depths can be viewed as general extensions of the work of [18]. We
also explore a quantity called the recurrent skip coefficient which measures how quickly information
propagates over time. This quantity is strongly related to vanishing/exploding gradient issues, and
helps deal with long term dependency problems. Skip connections crossing different timescales have
also been studied by [19, 15, 20, 21]. Instead of specific architecture design, we focus on analyzing
the graph-theoretic properties of recurrent skip coefficients, revealing the fundamental difference
between the regular skip connections and the ones which truly increase the recurrent skip coefficients.
We rigorously prove each measure?s existence and computability under the general framework.
We empirically evaluate models with different recurrent/feedforward depths and recurrent skip
coefficients on various sequential modelling tasks. We also show that our experimental results further
validate the usefulness of the proposed definitions.
2
General Formulations of RNN Connecting Architectures
RNNs are learning machines that recursively compute new states by applying transition functions
to previous states and inputs. Its connecting architecture describes how information flows between
different nodes. In this section, we formalize the concept of the connecting architecture by extending
the traditional graph-based illustration to a more general definition with a finite directed multigraph
and its unfolded version. Let us first define the notion of the RNN cyclic graph Gc that can be viewed
as a cyclic graphical representation of RNNs. We attach "weights" to the edges in the cyclic graph Gc
that represent time delay differences between the source and destination node in the unfolded graph.
Definition 2.1. Let Gc = (Vc, Ec) be a weighted directed multigraph (see footnote 2), in which Vc = Vin ∪ Vout ∪
Vhid is a finite nonempty set of nodes, and Ec ⊆ Vc × Vc × Z is a finite set of directed edges. Each
e = (u, v, σ) ∈ Ec denotes a directed weighted edge pointing from node u to node v with an integer
weight σ. Each node v ∈ Vc is labelled by an integer tuple (i, p). i ∈ {0, 1, . . . , m − 1} denotes
the time index of the given node, where m is the period number of the RNN, and p ∈ S, where S is
a finite set of node labels. We call the weighted directed multigraph Gc = (Vc, Ec) an RNN cyclic
graph, if (1) For every edge e = (u, v, σ) ∈ Ec, letting iu and iv denote the time indices of nodes u and v,
then σ = iv − iu + k · m for some k ∈ Z. (2) There exists at least one directed cycle (see footnote 3) in Gc. (3)
For any closed walk ω, the sum of all the σ along ω is not zero.
Condition (1) assures that we can get a periodic graph (repeating pattern) when unfolding the RNN
through time. Condition (2) excludes feedforward neural networks in the definition by forcing to
have at least one cycle in the cyclic graph. Condition (3) simply avoids cycles after unfolding. The
cyclic representation can be seen as a time folded representation of RNNs, as shown in Figure 1(a).
Given an RNN cyclic graph Gc, we unfold Gc over time t ∈ Z by the following procedure:
Definition 2.2 (Unfolding). Given an RNN cyclic graph Gc = (Vc, Ec, σ), we define a new infinite
set of nodes Vun = {(i + km, p) | (i, p) ∈ Vc, k ∈ Z}. The new set of edges Eun ⊆ Vun × Vun is
constructed as follows: ((t, p), (t′, p′)) ∈ Eun if and only if there is an edge e = ((i, p), (i′, p′), σ) ∈
Ec such that t′ − t = σ, and t ≡ i (mod m). The new directed graph Gun = (Vun, Eun) is called
the unfolding of Gc. Any infinite directed graph that can be constructed from an RNN cyclic graph
through unfolding is called an RNN unfolded graph.
Lemma 2.1. The unfolding Gun of any RNN cyclic graph Gc is a directed acyclic graph (DAG).
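To make the construction concrete, here is a minimal Python sketch (our own illustration; the helper name `unfold` and the edge-list encoding are assumptions, not notation from the paper) that stores Gc as (node, node, σ) triples and materializes a finite window of Gun:

```python
# Sketch: unfolding an RNN cyclic graph (Definition 2.2), assuming a simple
# edge-list encoding of G_c. Nodes are (time_index, label) pairs; each edge
# carries an integer weight sigma.

def unfold(cyclic_edges, m, t_max):
    """Materialize the edges of G_un restricted to time steps 0..t_max.

    An edge ((i, p), (i2, p2), sigma) of G_c covers every unfolded edge
    ((t, p), (t + sigma, p2)) with t = i (mod m).
    """
    unfolded = []
    for (i, p), (i2, p2), sigma in cyclic_edges:
        for t in range(i % m, t_max + 1, m):
            if 0 <= t + sigma <= t_max:
                unfolded.append(((t, p), (t + sigma, p2)))
    return unfolded

# A period-1 toy graph: one self-loop of weight 1 and one of weight 2,
# similar to the weight-2 loop on node (0, 3) discussed below.
edges = [((0, 'h'), (0, 'h'), 1), ((0, 'h'), (0, 'h'), 2)]
print(unfold(edges, m=1, t_max=4))
```

The printed edge set contains no cycle, in line with Lemma 2.1.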
Figure 1(a) shows an example of two graph representations Gun and Gc of a given RNN. Consider
the edge from node (1, 7) going to node (0, 3) in Gc . The fact that it has weight 1 indicates that the
corresponding edge in Gun travels one time step, ((t + 1, 7), (t + 2, 3)). Note that node (0, 3) also has
a loop with weight 2. This loop corresponds to the edge ((t, 3), (t + 2, 3)). The two kinds of graph
representations we presented above have a one-to-one correspondence. Also, any graph structure
on Gun is naturally mapped into a corresponding graph structure on Gc. Given an edge tuple ē = (u, v, σ)
in Gc, σ stands for the number of time steps crossed by ē's covering edges in Eun, i.e., every
corresponding edge e ∈ Gun must go from some time index t to t + σ. Hence σ corresponds
to the "time delay" associated with e. In addition, the period number m in Definition 2.1 can be
interpreted as the time length of the entire non-repeated recurrent structure in its unfolded RNN graph
Gun . In other words, shifting the Gun through time by km time steps will result in a DAG which is
2. A directed multigraph is a directed graph that allows multiple directed edges connecting two nodes.
3. A directed cycle is a closed walk with no repetitions of edges.
Figure 1: (a) An example of an RNN's Gc and Gun. Vin is denoted by square, Vhid is denoted by circle and Vout
is denoted by diamond. In Gc, the number on each edge is its corresponding σ. The longest path is colored in
red. The longest input-output path is colored in yellow and the shortest path is colored blue. The values of the three
measures are dr = 3/2, df = 7/2 and s = 2. (b) 5 more examples. (1) and (2) have dr = 2 and 3/2, (3) has df = 5, (4)
and (5) have s = 2 and 3/2.
identical to Gun , and m is the smallest number that has such property for Gun . Most traditional RNNs
have m = 1, while some special structures like hierarchical or clockwork RNN [15, 21] have m > 1.
For example, Figure 1(a) shows that the period number of this specific RNN is 2.
The connecting architecture describes how information flows among RNN units. Assume v̄ ∈ Vc
is a node in Gc; let In(v̄) denote the set of incoming nodes of v̄, In(v̄) = {ū | (ū, v̄) ∈ Ec}. In
the forward pass of the RNN, the transition function Fv̄ takes the outputs of the nodes in In(v̄) as inputs and
computes a new output. For example, vanilla RNNs units with different activation functions, LSTMs
and GRUs can all be viewed as units with specific transition functions. We now give the general
definition of an RNN:
Definition 2.3. An RNN is a tuple (Gc, Gun, {Fv̄}v̄∈Vc), in which Gun = (Vun, Eun) is the unfolding
of the RNN cyclic graph Gc, and {Fv̄}v̄∈Vc is the set of transition functions. In the forward pass, for
each hidden and output node v ∈ Vun, the transition function Fv̄ takes all incoming nodes of v as the
input to compute the output.
An RNN is homogeneous if all the hidden nodes share the same form of the transition function.
3
Measures of Architectural Complexity
In this section, we develop different measures of RNNs? architectural complexity, focusing mostly
on the graph-theoretic properties of RNNs. To analyze an RNN solely from its architectural aspect,
we make the mild assumption that the RNN is homogeneous. We further assume the RNN to
be unidirectional. For a bidirectional RNN, it is more natural to measure the complexities of its
unidirectional components.
3.1
Recurrent Depth
Unlike feedforward models where computations are done within one time frame, RNNs map inputs
to outputs over multiple time steps. In some sense, an RNN undergoes transformations along both
feedforward and recurrent dimensions. This fact suggests that we should investigate its architectural
complexity from these two different perspectives. We first consider the recurrent perspective.
The conventional definition of depth is the maximum number of nonlinear transformations from inputs
to outputs. Observe that a directed path in an unfolded graph representation Gun corresponds to a
sequence of nonlinear transformations. Given an unfolded RNN graph Gun, ∀i, n ∈ Z, let Di(n) be
the length of the longest path from any node at starting time i to any node at time i + n. From the
recurrent perspective, it is natural to investigate how Di (n) changes over time. Generally speaking,
Di (n) increases as n increases for all i. Such increase is caused by the recurrent structure of the RNN
which keeps adding new nonlinearities over time. Since Di(n) approaches ∞ as n approaches ∞ (see footnote 4),
to measure the complexity of Di(n), we consider its asymptotic behaviour, i.e., the limit of Di(n)/n
as n → ∞. Under a mild assumption, this limit exists. The following theorem proves such limit's
computability and well-definedness:
Theorem 3.2 (Recurrent Depth). Given an RNN and its two graph representations Gun and Gc, we
denote C(Gc) to be the set of directed cycles in Gc. For ϑ ∈ C(Gc), let l(ϑ) denote the length of ϑ
and σs(ϑ) denote the sum of edge weights σ along ϑ. Under a mild assumption (see footnote 5),

    dr = lim_{n→+∞} Di(n)/n = max_{ϑ∈C(Gc)} l(ϑ)/σs(ϑ).    (1)

4. Without loss of generality, we assume the unidirectional RNN approaches positive infinity.
More intuitively, dr is a measure of the average maximum number of nonlinear transformations per
time step as n gets large. Thus, we call it recurrent depth:
Definition 3.1 (Recurrent Depth). Given an RNN and its two graph representations Gun and Gc ,
we call dr , defined in Eq.(1), the recurrent depth of the RNN.
In Figure 1(a), one can easily verify that Dt(1) = 5, Dt(2) = 6, Dt(3) = 8, Dt(4) = 9, . . . Thus
Dt(1)/1 = 5, Dt(2)/2 = 3, Dt(3)/3 = 8/3, Dt(4)/4 = 9/4, . . ., which eventually converges to 3/2 as n → ∞. As
n increases, most parts of the longest path coincides with the path colored in red. As a result, dr
coincides with the number of nodes the red path goes through per time step. Similarly in Gc , observe
that the red cycle achieves the maximum (3/2) in Eq. (1). Usually, one can directly calculate dr from
Gun . It is easy to verify that simple RNNs and stacked RNNs share the same recurrent depth which is
equal to 1. This reveals the fact that their nonlinearities increase at the same rate, which suggests
that they will behave similarly in the long run. This fact is often neglected, since one would typically
consider the number of layers as a measure of depth, and think of stacked RNNs as "deep" and simple
RNNs as "shallow", even though their discrepancies are not due to recurrent depth (which regards
time) but due to feedforward depth, defined next.
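Because Gc is finite and small for the architectures considered here, Eq. (1) can be evaluated by brute-force enumeration of simple cycles. The sketch below is our own illustration, not code from the paper (for large graphs one would use a maximum cycle-ratio algorithm instead):

```python
def recurrent_depth(nodes, edges):
    """Brute-force Eq. (1): d_r = max over directed cycles theta of
    l(theta) / sigma_s(theta). Edges are (u, v, sigma) triples on the
    folded graph G_c; fine for small graphs."""
    best = 0.0

    def dfs(start, v, visited, length, sigma_sum):
        nonlocal best
        for (u, w, s) in edges:
            if u != v:
                continue
            if w == start:
                if sigma_sum + s > 0:  # guaranteed by condition (3) here
                    best = max(best, (length + 1) / (sigma_sum + s))
            elif w not in visited:
                dfs(start, w, visited | {w}, length + 1, sigma_sum + s)

    for n in nodes:
        dfs(n, n, {n}, 0, 0)
    return best

# The "td" architecture of Section 4.2, in our encoding: layer 1 feeds
# layer 2 within a time step (sigma = 0), layer 2 feeds back into layer 1
# one step later (sigma = 1), and each layer is also self-recurrent.
edges = [('h1', 'h2', 0), ('h2', 'h1', 1), ('h1', 'h1', 1), ('h2', 'h2', 1)]
print(recurrent_depth(['h1', 'h2'], edges))  # -> 2.0
```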
3.3
Feedforward Depth
Recurrent depth does not fully characterize the nature of nonlinearity of an RNN. As previous work
suggests [3], stacked RNNs do outperform shallow ones with the same hidden size on problems
where a more immediate input and output process is modeled. This is not surprising, since the growth
rate of Di (n) only captures the number of nonlinear transformations in the time direction, not in
the feedforward direction. The perspective of feedforward computation puts more emphasis on the
specific paths connecting inputs to outputs. Given an RNN unfolded graph Gun, let D̄i(n) be the
length of the longest path from any input node at time step i to any output node at time step i + n.
Clearly, when n is small, the recurrent depth cannot serve as a good description for D̄i(n). In fact, it
heavily depends on another quantity which we call feedforward depth. The following proposition
guarantees the existence of such a quantity and demonstrates the role of both measures in quantifying
the nonlinearity of an RNN.
Proposition 3.3.1 (Input-Output Length Least Upper Bound). Given an RNN with recurrent
depth dr, we denote df = sup_{i,n∈Z} D̄i(n) − n · dr; the supremum df exists and thus we have the
following upper bound for D̄i(n):

    D̄i(n) ≤ n · dr + df.
The above upper bound explicitly shows the interplay between recurrent depth and feedforward
depth: when n is small, D?i (n) is largely bounded by df ; when n is large, dr captures the nature
of the bound (? n ? dr ). These two measures are equally important, as they separately capture the
maximum number of nonlinear transformations of an RNN in the long run and in the short run.
Definition 3.2. (Feedforward Depth) Given an RNN with recurrent depth dr and its two graph
representations Gun and Gc, we call df, defined in Proposition 3.3.1, the feedforward depth (see footnote 6) of the
RNN.
The following theorem proves df's computability:
Theorem 3.4 (Feedforward Depth). Given an RNN and its two graph representations Gun and Gc,
we denote Γ(Gc) the set of directed paths that start at an input node and end at an output node in Gc.
For γ ∈ Γ(Gc), denote l(γ) the length and σs(γ) the sum of σ along γ. Then we have:

    df = sup_{i,n∈Z} D̄i(n) − n · dr = max_{γ∈Γ(Gc)} l(γ) − σs(γ) · dr,

where m is the period number and dr is the recurrent depth of the RNN.
For example, in Figure 1(a), one can easily verify that df = D̄t(0) = 3. Most commonly, df is the
same as D̄t(0), i.e., the maximum length from an input to its current output.
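The same enumeration idea evaluates Theorem 3.4 on small folded graphs, this time over simple input-to-output paths; the sketch below is again our own construction:

```python
def feedforward_depth(inputs, outputs, edges, d_r):
    """Brute-force Theorem 3.4: d_f = max over input-to-output paths gamma
    of l(gamma) - sigma_s(gamma) * d_r, with d_r from Eq. (1)."""
    best = float('-inf')

    def dfs(v, visited, length, sigma_sum):
        nonlocal best
        if v in outputs:
            best = max(best, length - sigma_sum * d_r)
        for (u, w, s) in edges:
            if u == v and w not in visited:
                dfs(w, visited | {w}, length + 1, sigma_sum + s)

    for x in inputs:
        dfs(x, {x}, 0, 0)
    return best

# A plain one-layer RNN: x -> h within the step, h -> h one step later,
# h -> o within the step.
edges = [('x', 'h', 0), ('h', 'h', 1), ('h', 'o', 0)]
print(feedforward_depth({'x'}, {'o'}, edges, d_r=1.0))  # -> 2.0
```

For this plain one-layer RNN (dr = 1) the call returns 2, matching the convention of footnote 6 that a single hidden layer counts as depth 2.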
5. See a full treatment of the limit in general cases in Theorem A.1 and Proposition A.1.1 in the Appendix.
6. Conventionally, an architecture with depth 1 is a three-layer architecture containing one hidden layer. But in
our definition, since it goes through two transformations, we count the depth as 2 instead of 1. This should be
particularly noted with the concept of feedforward depth, which can be thought of as the conventional depth plus 1.
3.5
Recurrent Skip Coefficient
Depth provides a measure of the complexity of the model. But such a measure is not sufficient to
characterize behavior on long-term dependency tasks. In particular, since models with large recurrent
depths have more nonlinearities through time, gradients can explode or vanish more easily. On the
other hand, it is known that adding skip connections across multiple time steps may help improve
the performance on long-term dependency problems [19, 20]. To measure such a "skipping" effect,
we should instead pay attention to the length of the shortest path from time i to time i + n. In Gun ,
∀i, n ∈ Z, let di(n) be the length of the shortest path. Similar to the recurrent depth, we consider the
growth rate of di (n).
Theorem 3.6 (Recurrent Skip Coefficient). Given an RNN and its two graph representations Gun
and Gc, under mild assumptions (see footnote 7),

    j = lim_{n→+∞} di(n)/n = min_{ϑ∈C(Gc)} l(ϑ)/σs(ϑ).    (2)
Since it is often the case that j is smaller than or equal to 1, it is more intuitive to consider its reciprocal.
Definition 3.3 (Recurrent Skip Coefficient; see footnote 8). Given an RNN and corresponding Gun and Gc, we
define s = 1/j, where j is defined in Eq. (2), as the recurrent skip coefficient of the RNN.
With a larger recurrent skip coefficient, the number of transformations per time step is smaller. As a
result, the nodes in the RNN are more capable of "skipping" across the network, allowing unimpeded
information flow across multiple time steps, thus alleviating the problem of learning long term
dependencies. In particular, such an effect is more prominent in the long run, due to the network's
recurrent structure. Also note that not all types of skip connections can increase the recurrent skip
coefficient. We will consider specific examples in our experimental results section.
4
Experiments and Results
In this section we conduct a series of experiments to investigate the following questions: (1) Is
recurrent depth a trivial measure? (2) Can increasing depth yield performance improvements? (3)
Can increasing the recurrent skip coefficient improve the performance on long term dependency
tasks? (4) Does the recurrent skip coefficient suggest something more compared to simply adding
skip connections? We show our evaluations on both tanh RNNs and LSTMs.
4.1
Tasks and Training Settings
PennTreebank dataset: We evaluate our models on character level language modelling using the
PennTreebank dataset [22]. It contains 5059k characters for training, 396k for validation and 446k
for test, and has a alphabet size of 50. We set each training sequence to have the length of 50. Quality
of fit is evaluated by the bits-per-character (BPC) metric, which is log2 of perplexity.
text8 dataset: Another dataset used for character level language modelling is the text8 dataset (see footnote 9),
which contains 100M characters from Wikipedia with an alphabet size of 27. We follow the setting
from [23] and each training sequence has length of 180.
adding problem: The adding problem (and the following copying memory problem) was introduced
in [10]. For the adding problem, each input has two sequences of length T, where the first
sequence contains numbers sampled from uniform[0, 1] and the second sequence is all zeros except for two
elements which indicate the positions of the two elements in the first sequence that should be summed
together. The output is the sum. We follow the most recent results and experimental settings in [24]
(same for copying memory).
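For reference, one batch of the adding problem as described above can be generated along these lines (a sketch; details such as where the two markers may fall follow our reading, and the exact protocol of [24] could differ):

```python
import numpy as np

def adding_problem_batch(batch_size, T, rng=np.random):
    """One batch: inputs of shape (batch, T, 2) holding the value sequence
    and the 0/1 marker sequence, and scalar targets (the two marked values
    summed)."""
    values = rng.uniform(0.0, 1.0, size=(batch_size, T))
    markers = np.zeros((batch_size, T))
    targets = np.zeros(batch_size)
    for b in range(batch_size):
        i, j = rng.choice(T, size=2, replace=False)
        markers[b, i] = markers[b, j] = 1.0
        targets[b] = values[b, i] + values[b, j]
    inputs = np.stack([values, markers], axis=-1)
    return inputs, targets
```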
copying memory problem: Each input sequence has length of T + 20, where the first 10 values are
random integers between 1 and 8. The model should remember them after T steps. The rest of the
sequence is all zeros, except for the last 11 entries in the sequence, which start with 9 as a marker
indicating that the model should begin to output its memorized values. The model is expected to
give zero outputs at every time step except the last 10 entries, where it should generate (copy) the 10
values in the same order as it has seen at the beginning of the sequence. The goal is to minimize the
average cross entropy of category predictions at each time step.
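The copying inputs can be built similarly; in this sketch (our encoding) category 0 is the blank symbol, 1-8 are the data symbols, and 9 is the recall marker:

```python
import numpy as np

def copying_batch(batch_size, T, rng=np.random):
    """Inputs and targets of length T + 20, as integer category sequences."""
    seq_len = T + 20
    x = np.zeros((batch_size, seq_len), dtype=np.int64)
    y = np.zeros((batch_size, seq_len), dtype=np.int64)
    data = rng.randint(1, 9, size=(batch_size, 10))  # 10 symbols in 1..8
    x[:, :10] = data
    x[:, T + 9] = 9      # marker opening the final 11-entry recall window
    y[:, -10:] = data    # model must copy the symbols; zeros elsewhere
    return x, y
```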
7. See Proposition A.3.1 in the Appendix.
8. One would find this definition very similar to the definition of the recurrent depth. Therefore, we refer
readers to examples in Figure 1 for some illustrations.
9. http://mattmahoney.net/dc/textdata.
Figure 2: Left: (a) The architectures for sh, st, bu and td, with their (dr , df ) equal to (1, 2), (1, 3), (1, 3) and
(2, 3), respectively. The longest path in td is colored in red. (b) The 9 architectures denoted by their (df , dr )
with dr = 1, 2, 3 and df = 2, 3, 4. In both (a) and (b), we only plot hidden states at two adjacent time steps
and the connections between them (the period number is 1). Right: (a) Various architectures that we consider
in Section 4.4. From top to bottom are baseline s = 1, and s = 2, s = 3. (b) Proposed architectures that we
consider in Section 4.5 where we take k = 3 as an example. The shortest paths in (a) and (b) that correspond to
the recurrent skip coefficients are colored in blue.
Left:

Dataset        Models\Archs      sh      st      bu      td
PennTreebank   tanh RNN          1.54    1.59    1.54    1.49
text8          tanh RNN-small    1.80    1.82    1.80    1.77
text8          tanh RNN-large    1.69    1.67    1.64    1.59
text8          LSTM-small        1.65    1.66    1.65    1.63
text8          LSTM-large        1.52    1.53    1.52    1.49

Right:

df\dr    dr = 1    dr = 2    dr = 3
df = 2   1.88      1.86      1.86
df = 3   1.86      1.84      1.86
df = 4   1.85      1.86      1.88
Table 1: Left: Test BPCs of sh, st, bu, td for tanh RNNs and LSTMs. Right: Test BPCs of tanh RNNs with
recurrent depth dr = 1, 2, 3 and feedforward depth df = 2, 3, 4 respectively.
sequential MNIST dataset: Each MNIST image is reshaped into a 784 × 1 sequence, turning
the digit classification task into a sequence classification one with long-term dependencies [25, 24].
A slight modification of the dataset is to permute the image sequences by a fixed random order
beforehand (permuted MNIST). Results in [25] have shown that both tanh RNNs and LSTMs did not
achieve satisfying performance, which also highlights the difficulty of this task.
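The sequential and permuted variants differ only by one fixed permutation, e.g. (our sketch; the permutation seed is arbitrary):

```python
import numpy as np

def to_sequential_mnist(images, permute=False, seed=0):
    """Reshape (N, 28, 28) images into (N, 784, 1) pixel sequences and,
    for pMNIST, apply one fixed random permutation to every sequence."""
    seqs = images.reshape(len(images), 784, 1).astype(np.float32)
    if permute:
        perm = np.random.RandomState(seed).permutation(784)
        seqs = seqs[:, perm, :]
    return seqs
```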
For all of our experiments we use Adam [26] for optimization, and conduct a grid search on the
learning rate in {10^-2, 10^-3, 10^-4, 10^-5}. For tanh RNNs, the parameters are initialized with
samples from a uniform distribution. For LSTM networks we adopt a similar initialization scheme,
while the forget gate biases are chosen by the grid search on {−5, −3, −1, 0, 1, 3, 5}. We employ
early stopping and the batch size was set to 50.
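The sweep amounts to a small outer loop; in the sketch below, `train_fn` is a caller-supplied stand-in (not a routine from the paper) that trains one model with Adam, batch size 50, and early stopping, and returns its validation loss:

```python
import itertools

def grid_search(train_fn):
    """Sweep the grid described above; returns the best (lr, forget_bias)
    configuration according to train_fn's validation loss."""
    grid = itertools.product([1e-2, 1e-3, 1e-4, 1e-5],   # learning rates
                             [-5, -3, -1, 0, 1, 3, 5])   # forget biases
    return min(grid, key=lambda cfg: train_fn(*cfg))
```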
4.2
Recurrent Depth is Non-trivial
To investigate the first question, we compare 4 similar connecting architectures: 1-layer (shallow)
"sh", 2-layer stacked "st", 2-layer stacked with an extra bottom-up connection "bu", and 2-layer
stacked with an extra top-down connection "td", as shown in Figure 2(a), left panel. Although the
four architectures look quite similar, they have different recurrent depths: sh, st and bu have dr = 1,
while td has dr = 2. Note that the specific construction of the extra nonlinear transformations in td is
not conventional. Instead of simply adding intermediate layers in hidden-to-hidden connection, as
reported in [18], more nonlinearities are gained by a recurrent flow from the first layer to the second
layer and then back to the first layer at each time step (see the red path in Figure 2a, left panel).
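Concretely, one time step of td can be written as follows (a sketch with biases omitted; the experiments' exact parameterization may differ). The top-down term is what closes the length-2 recurrent cycle and lifts dr to 2:

```python
import numpy as np

def td_step(x_t, h1_prev, h2_prev, W):
    """One step of 'td': two stacked tanh layers plus a top-down
    connection from layer 2 back into layer 1."""
    h1 = np.tanh(W['xh1'] @ x_t
                 + W['h1h1'] @ h1_prev
                 + W['h2h1'] @ h2_prev)   # the extra top-down connection
    h2 = np.tanh(W['h1h2'] @ h1 + W['h2h2'] @ h2_prev)
    return h1, h2
```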
We first evaluate our architectures using tanh RNN on PennTreebank, where sh has hidden-layer
size of 1600. Next, we evaluate four different models for text8 which are tanh RNN-small, tanh
RNN-large, LSTM-small, LSTM large, where the model?s sh architecture has hidden-layer size of
512, 2048, 512, 1024 respectively. Given the architecture of the sh model, we set the remaining
three architectures to have the same number of parameters. Table 1, left panel, shows that the td
architecture outperforms all the other architectures for all the different models. Specifically, td in
tanh RNN achieves a test BPC of 1.49 on PennTreebank, which is comparable to the BPC of 1.48
reported in [27] using stabilization techniques. Similar improvements are shown for LSTMs, where td
architecture in LSTM-large achieves BPC of 1.49 on text8, outperforming the BPC of 1.54 reported
in [23] with Multiplicative RNN (MRNN). It is also interesting to note the improvement we obtain
when switching from bu to td. The only difference between these two architectures lies in changing
the direction of one connection (see Figure 2(a)), which also increases the recurrent depth. Such a
fundamental difference is by no means self-evident, but this result highlights the necessity of the
concept of recurrent depth.
4.3
Comparing Depths
From the previous experiment, we found some evidence that with larger recurrent depth, the performance might improve. To further investigate various implications of depths, we carry out a
systematic analysis for both recurrent depth dr and feedforward depth df on text8 and sequential
MNIST datasets. We build 9 models in total with dr = 1, 2, 3 and df = 2, 3, 4, respectively (as
shown in Figure 2(b)). We ensure that all the models have roughly the same number of parameters
(e.g., the model with dr = 1 and df = 2 has a hidden-layer size of 360).
Table 1, right panel, displays results on the text8 dataset. We observed that when fixing feedforward
depth df = 2, 3 (or fixing recurrent depth dr = 1, 2), increasing recurrent depth dr from 1 to
2 (or increasing feedforward depth df from 2 to 3) does improve the model performance. The
best test BPC is achieved by the architecture with df = 3, dr = 2. This suggests that reasonably
increasing dr and df can aid in better capturing the over-time nonlinearity of the input sequence.
However, for too large dr (or df ) like dr = 3 or df = 4, increasing df (or dr ) only hurts models
performance. This can potentially be attributed to the optimization issues when modelling large
input-to-output dependencies (see Appendix B.4 for more details). With sequential MNIST dataset,
we next examined the effects of df and dr when modelling long term dependencies (more in Appendix
B.4). In particular, we observed that increasing df does not bring any improvement to the model
performance, and increasing dr might even be detrimental for training. Indeed, it appears that df
only captures the local nonlinearity and has less effect on the long term prediction. This result seems
to contradict previous claims [17] that stacked RNNs (df > 1, dr = 1) could capture information in
different time scales and would thus be more capable of dealing with learning long-term dependencies.
On the other hand, a large dr indicates multiple transformations per time step, resulting in greater
gradient vanishing/exploding issues [18], which suggests that dr should be neither too small nor too
large.
4.4
Recurrent Skip Coefficients
To investigate whether increasing a recurrent skip coefficient s improves model performance on long
term dependency tasks, we compare models with increasing s on the adding problem, the copying
memory problem and the sequential MNIST problem (without/with permutation, denoted as MNIST
and pMNIST). Our baseline model is the shallow architecture proposed in [25]. To increase the
recurrent skip coefficient s, we add connections from time step t to time step t + k for some fixed
integer k, shown in Figure 2(a), right panel. By using this specific construction, the recurrent skip
coefficient increases from 1 (i.e., baseline) to k and the new model with extra connection has 2 hidden
matrices (one from t to t + 1 and the other from t to t + k).
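In equations, the modified step reads as below (our sketch, biases omitted): the update mixes h_{t−1} and h_{t−k} through the two hidden matrices, so the shortest recurrent cycle has σ = k and the recurrent skip coefficient is k:

```python
import numpy as np

def skip_rnn_step(x_t, h_prev1, h_prevk, W):
    """tanh RNN step with the extra connection from t - k to t."""
    return np.tanh(W['xh'] @ x_t
                   + W['hh1'] @ h_prev1    # from t - 1
                   + W['hhk'] @ h_prevk)   # from t - k
```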
For the adding problem, we follow the same setting as in [24]. We evaluate the baseline LSTM
with 128 hidden units and an LSTM with s = 30 and 90 hidden units (roughly the same number of
parameters as the baseline). The results are quite encouraging: as suggested in [24] baseline LSTM
works well for input sequence lengths T = 100, 200, 400 but fails when T = 750. On the other hand,
we observe that the LSTM with s = 30 learns perfectly when T = 750, and even if we increase T to
1000, LSTM with s = 30 still works well and the loss reaches zero.
For the copying memory problem, we use a single layer RNN with 724 hidden units as our basic model,
and 512 hidden units with skip connections. So they have roughly the same number of parameters.
Models with a higher recurrent skip coefficient outperform those without skip connections by a large
margin. When T = 200, test set cross entropy (CE) of a basic model only yields 0.2409, but with
s = 40 it is able to reach a test set cross entropy of 0.0975. When T = 300, a model with s = 30
yields a test set CE of 0.1328, while its baseline could only reach 0.2025. We varied the sequence
length (T ) and recurrent skip coefficient (s) in a wide range (where T varies from 100 up to 300, and
s from 10 up to 50), and found that this kind of improvement persists.
For the sequential MNIST problem, the hidden-layer size of the baseline model is set to 90 and
models with s > 1 have hidden-layer sizes of 64. The results in Table 2, top-left panel, show that
tanh RNNs with recurrent skip coefficient s larger than 1 could improve the model performance
dramatically. Within a reasonable range of s, test accuracy increases quickly as s becomes larger.
We note that our model is the first tanh RNN model that achieves good performance on this task,
even improving upon the method proposed in [25]. In addition, we also formally compare with
the previous results reported in [25, 24], where our model (referred to as stanh) has a hidden-layer
size of 95, which is about the same number of parameters as in the tanh model of [24]. Table 2,
bottom-left panel, shows that our simple architecture improves upon the uRNN by 2.6% on pMNIST,
Top-left (stanh):

         s = 1   s = 5   s = 9   s = 13   s = 21
MNIST    34.9    46.9    74.9    85.4     87.8
         s = 1   s = 3   s = 5   s = 7    s = 9
pMNIST   49.8    79.1    84.3    88.9     88.0

Top-right (LSTM):

         s = 1   s = 3   s = 5   s = 7   s = 9
MNIST    56.2    87.2    86.4    86.4    84.8
         s = 1   s = 3   s = 4   s = 5   s = 6
pMNIST   28.5    25.0    60.8    62.2    65.9

Bottom-left:

Model                 MNIST    pMNIST
iRNN [25]             97.0     ≈82.0
uRNN [24]             95.1     91.4
LSTM [24]             98.2     88.0
RNN(tanh) [25]        ≈35.0    ≈35.0
stanh (s = 21, 11)    98.1     94.0

Bottom-right:

Architecture, s    (1), 1   (2), 1   (3), k/2   (4), k
MNIST  k = 17      39.5     39.4     54.2       77.8
       k = 21      39.5     39.9     69.6       71.8
pMNIST k = 5       55.5     66.6     74.7       81.2
       k = 9       55.5     71.1     78.6       86.9
Table 2: Results for MNIST/pMNIST. Top-left: Test accuracies with different s for tanh RNN. Top-right:
Test accuracies with different s for LSTM. Bottom-left: Compared to previous results. Bottom-right: Test
accuracies for architectures (1), (2), (3) and (4) for tanh RNN.
and achieves almost the same performance as LSTM on the MNIST dataset with only 25% number of
parameters [24]. Note that obtaining good performance on sequential MNIST requires a larger s than
that for pMNIST (see Appendix B.4 for more details). LSTMs also showed performance boost and
much faster convergence speed when using larger s, as displayed in Table 2, top-right panel. LSTM
with s = 3 already performs quite well and increasing s did not result in any significant improvement,
while in pMNIST, the performance gradually improves as s increases from 4 to 6. We also observed
that the LSTM network performed worse on permuted MNIST compared to a tanh RNN. Similar
result was also reported in [25].
4.5
Recurrent Skip Coefficients vs. Skip Connections
We also investigated whether the recurrent skip coefficient can suggest something more than simply
adding skip connections. We design 4 specific architectures shown in Figure 2(b), right panel. (1)
is the baseline model with a 2-layer stacked architecture, while the other three models add extra
skip connections in different ways. Note that these extra skip connections all cross the same time
length k. In particular, (2) and (3) share quite similar architectures. However, ways in which the skip
connections are allocated makes big differences on their recurrent skip coefficients: (2) has s = 1, (3)
has s = k/2 and (4) has s = k. Therefore, even though (2), (3) and (4) all add extra skip connections,
the fact that their recurrent skip coefficients are different might result in different performance.
We evaluated these architectures on the sequential MNIST and pMNIST datasets. The results show
that differences in s indeed cause big performance gaps regardless of the fact that they all have skip
connections (see Table 2, bottom-right panel). Given the same k, the model with a larger s performs
better. In particular, model (3) is better than model (2) even though they only differ in the direction of
the skip connections. It is interesting to see that for MNIST (unpermuted), the extra skip connection
in model (2) (which does not really increase the recurrent skip coefficient) brings almost no benefits,
as model (2) and model (1) have almost the same results. This observation highlights the following
point: when addressing the long term dependency problems using skip connections, instead of only
considering the time intervals crossed by the skip connection, one should also consider the model?s
recurrent skip coefficient, which can serve as a guide for introducing more powerful skip connections.
5
Conclusion
In this paper, we first introduced a general formulation of RNN architectures, which provides a solid
framework for the architectural complexity analysis. We then proposed three architectural complexity
measures: recurrent depth, feedforward depth, and recurrent skip coefficients capturing both short
term and long term properties of RNNs. We also found empirical evidences that increasing recurrent
depth and feedforward depth might yield performance improvements, increasing feedforward depth
might not help on long term dependency tasks, while increasing the recurrent skip coefficient can
largely improve performance on long term dependency tasks. These measures and results can provide
guidance for the design of new recurrent architectures for particular learning tasks.
Acknowledgments
The authors acknowledge the following agencies for funding and support: NSERC, Canada Research
Chairs, CIFAR, Calcul Quebec, Compute Canada, Samsung, ONR Grant N000141310721, ONR
Grant N000141512791 and IARPA Raytheon BBN Contract No. D11PC20071. The authors thank
the developers of Theano [28] and Keras [29], and also thank Nicolas Ballas, Tim Cooijmans, Ryan
Lowe, Mohammad Pezeshki, Roger Grosse and Alex Schwing for their insightful comments.
References
[1] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[4] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[5] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015.
[6] James Martens and Ilya Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1033–1040, 2011.
[7] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pages 1310–1318, 2013.
[8] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991.
[9] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. arXiv preprint arXiv:1503.04069, 2015.
[12] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[13] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350, 2015.
[14] Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
[15] Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems, pages 493–499, 1996.
[16] Tapani Raiko, Harri Valpola, and Yann LeCun. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924–932, 2012.
[17] Michiel Hermans and Benjamin Schrauwen. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pages 190–198, 2013.
[18] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
[19] T. Lin, B. G. Horne, P. Tino, and C. L. Giles. Learning long-term dependencies is not as difficult with NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329–1338, November 1996.
[20] Ilya Sutskever and Geoffrey Hinton. Temporal-kernel recurrent neural networks. Neural Networks, 23(2):239–243, 2010.
[21] Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. In Proceedings of The 31st International Conference on Machine Learning, pages 1863–1871, 2014.
[22] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
[23] Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, and Stefan Kombrink. Subword language modeling with neural networks. Preprint (http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf), 2012.
[24] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXiv preprint arXiv:1511.06464, 2015.
[25] Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
[26] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[27] David Krueger and Roland Memisevic. Regularizing RNNs by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.
[28] The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
[29] François Chollet. Keras. GitHub repository: https://github.com/fchollet/keras, 2015.
5,863 | 6,304 | Convolutional Neural Fabrics
Shreyas Saxena
Jakob Verbeek
INRIA Grenoble – Laboratoire Jean Kuntzmann
Abstract
Despite the success of CNNs, selecting the optimal architecture for a given task
remains an open problem. Instead of aiming to select a single optimal architecture,
we propose a "fabric" that embeds an exponentially large number of architectures.
The fabric consists of a 3D trellis that connects response maps at different layers,
scales, and channels with a sparse homogeneous local connectivity pattern. The
only hyper-parameters of a fabric are the number of channels and layers. While
individual architectures can be recovered as paths, the fabric can in addition
ensemble all embedded architectures together, sharing their weights where their
paths overlap. Parameters can be learned using standard methods based on backpropagation, at a cost that scales linearly in the fabric size. We present benchmark
results competitive with the state of the art for image classification on MNIST and
CIFAR10, and for semantic segmentation on the Part Labels dataset.
1 Introduction
Convolutional neural networks (CNNs) [15] have proven extremely successful for a wide range
of computer vision problems and other applications. In particular, the results of Krizhevsky et
al . [13] have caused a major paradigm shift in computer vision from models relying in part on
hand-crafted features, to end-to-end trainable systems from the pixels upwards. One of the main
problems that holds back further progress using CNNs, as well as deconvolutional variants [24, 26]
used for semantic segmentation, is the lack of efficient systematic ways to explore the discrete and
exponentially large architecture space. To appreciate the number of possible architectures, consider a
standard chain-structured CNN architecture for image classification. The architecture is determined
by the following hyper-parameters: (i) number of layers, (ii) number of channels per layer, (iii) filter
size per layer, (iv) stride per layer, (v) number of pooling vs. convolutional layers, (vi) type of pooling
operator per layer, (vii) size of the pooling regions, (viii) ordering of pooling and convolutional layers,
(ix) channel connectivity pattern between layers, and (x) type of activation, e.g. ReLU or MaxOut, per
layer. The number of resulting architectures clearly does not allow for (near) exhaustive exploration.
We show that all network architectures that can be obtained for various choices of the above ten
hyper-parameters are embedded in a ?fabric? of convolution and pooling operators. Concretely,
the fabric is a three-dimensional trellis of response maps of various resolutions, with only local
connections across neighboring layers, scales, and channels. See Figure 1 for a schematic illustration
of how fabrics embed different architectures. Each activation in a fabric is computed as a linear
function followed by a non-linearity from a multi-dimensional neighborhood (spatial/temporal input
dimensions, a scale dimension and a channel dimension) in the previous layer. Setting the only two
hyper-parameters, number of layers and channels, is not critical as long as they are large enough. We
also consider two variants, one in which the channels are fully connected instead of sparsely, and
another in which the number of channels doubles if we move to a coarser scale. The latter allows for
one to two orders of magnitude more channels, while increasing memory requirements by only 50%.
All chain-structured network architectures embedded in the fabric can be recovered by appropriately
setting certain connections to zero, so that only a single processing path is active between input and
output. General, non-path, weight settings correspond to ensembling many architectures together,
Figure 1: Fabrics embedding two seven-layer CNNs (red, green) and a ten-layer deconvolutional
network (blue). Feature map sizes of the CNN layers are given by height. Fabric nodes receiving input
and producing output are encircled. All edges are oriented to the right, down in the first layer, and
towards the output in the last layer. The channel dimension of the 3D fabric is omitted for clarity.
which share parameters where the paths overlap. The acyclic trellis structure allows for learning
using standard error back-propagation methods. Learning can thus efficiently configure the fabric to
implement each one of exponentially many embedded architectures, as well as ensembles of them.
Experimental results competitive with the state of the art validate the effectiveness of our approach.
The contributions of our work are: (1) Fabrics allow by and large to sidestep the CNN model
architecture selection problem. Avoiding explicitly training and evaluating individual architectures
using, e.g., local-search strategies [2]. (2) While scaling linearly in terms of computation and memory
requirements, our approach leverages exponentially many chain-structured architectures in parallel
by massively sharing weights among them. (3) Since our fabric is multi-scale by construction, it
can naturally generate output at multiple resolutions, e.g. for image classification and semantic
segmentation or multi-scale object detection, within a single non-branching network structure.
2 Related work
Several chain-structured CNN architectures, including Alex-net [13] and the VGG-16 and VGG-19
networks [27], are widely used for image classification and related tasks. Although very effective, it is
not clear that these architectures are the best ones given their computational and memory requirements.
Their widespread adoption is in large part due to the lack of more effective methods to find good
architectures than trying them one-by-one, possibly initializing parameters from related ones [2].
CNN architectures for semantic segmentation, as well as other structured prediction tasks such
as human pose estimation [25], are often derived from ones developed for image classification,
see e.g. [20, 24, 31, 33]. Up-sampling operators are used to increase the resolution of the output,
compensating for pooling operators used in earlier layers of the network [24]. Ronneberger et al .
[26] present a network with additional links that couple layers with the same resolution near the input
and output. Other architectures, see e.g. [3, 7], process the input in parallel across several resolutions,
and then fuse all streams by re-sampling to the output resolution. Such architectures induce networks
with multiple parallel paths from input to output. We will show that nearly all such networks are
embedded in our fabrics, either as paths or other simple sub-graphs.
While multi-dimensional networks have been proposed in the past, e.g. to process non-sequential
data with recurrent nets [5, 11], to the best of our knowledge they have not been explored as a
?basis? to span large classes of convolutional neural networks. Misra et al . [23] propose related
cross-stitch networks that exchange information across corresponding layers of two copies of the
same architecture that produces two different outputs. Their approach is based on Alex-net [13],
and does not address the network architecture selection problem. In related work Zhou et al . [34]
interlink CNNs that take input from re-scaled versions of the input image. The structure of their
network is related to our fabric, but lacks a sparse connectivity pattern across channels. They
consider their networks for semantic segmentation, and set the filter sizes per node manually, and
use strided max-pooling for down-sampling and nearest neighbor interpolation for up-sampling. The
contribution of our work is to show that a similar network structure suffices to span a vast class of
network architectures for both dense prediction and classification tasks.
Springenberg et al . [29] experimentally observed that the use of max-pooling in CNN architectures is
not always beneficial as opposed to using strided convolutions. In our work we go one step further
and show that ReLU units and strided convolutions suffice to implement max-pooling operators in
our fabrics. Their work, similar to ours, also strives to simplify architecture design. Our results,
however, reach much further than only removing pooling operators from the architectural elements.
Lee et al . [17] generalize the max and average pooling operators by computing both max and average
pooling, and then fusing the result in a possibly data-driven manner. Our fabrics also generalize max
and average pooling, but instead of adding elementary operators, we show that settings weights in a
network with fewer elementary operators is enough for this generalization.
Kulkarni et al. [14] use ℓ1 regularization to automatically select the number of units in "fully-connected" layers of CNN architectures for classification. Although their approach does not directly
extend to determine more general architectural design choices, it might be possible to use such
regularization techniques to select the number of channels and/or layers of our fabrics.
Dropout [30] and swapout [28] are stochastic training methods related to our work. They can
be understood as approximately averaging over an exponential number of variations of a given
architecture. Our approach, on the other hand, allows to leverage an exponentially large class of
architectures (ordering of pooling and convolutional layers, type of pooling operator, etc.) by means
of continuous optimization. Note that these approaches are orthogonal and can be applied to fabrics.
3 The fabric of convolutional neural networks
In this section we give a precise definition of convolutional neural fabrics, and show in Section 3.2
that most architectural network design choices become irrelevant for sufficiently large fabrics. Finally,
we analyze the number of response maps, parameters, and activations of fabrics in Section 3.3.
3.1 Weaving the convolutional neural fabric
Each node in the fabric represents one response map with the same dimension D as the input signal
(D = 1 for audio, D = 2 for images, D = 3 for video). The fabric over the nodes is spanned
by three axes. A layer axis along which all edges advance, which rules out any cycles, and which
is analogous to the depth axis of a CNN. A scale axis along which response maps of different
resolutions are organized from fine to coarse, neighboring resolutions are separated by a factor two. A
channel axis along which different response maps of the same scale and layer are organized. We use
S = 1 + log₂ N scales when we process inputs of size N^D, e.g. for 32×32 images we use six scales,
so as to obtain a scale pyramid from the full input resolution to the coarsest 1×1 response maps.
We now define a sparse and homogeneous edge structure. Each node is connected to a 3×3 scale–channel neighborhood in the previous layer, i.e. activations at channel c, scale s, and layer l are computed as

$$a(s, c, l) = \sum_{i,j \in \{-1,0,1\}} \mathrm{conv}\big(a(c+i,\, s+j,\, l-1),\; w_{scl}^{ij}\big).$$

Input from a finer scale
is obtained via strided convolution, and input from a coarser scale by convolution after upsampling
by padding zeros around the activations at the coarser level. All convolutions use kernel size 3.
Activations are thus a linear function over multi-dimensional neighborhoods, i.e. a four dimensional
3×3×3×3 neighborhood when processing 2D images. The propagation is, however, only convolutional
across the input dimensions, and not across the scale and layer axes. The ?fully connected? layers
of a CNN correspond to nodes along the coarsest 1?1 scale of the fabric. Rectified linear units
(ReLUs) are used at all nodes. Figure 1 illustrates the connectivity pattern in 2D, omitting the channel
dimension for clarity. The supplementary material contains an illustration of the 3D fabric structure.
All channels in the first layer at the input resolution are connected to all channels of the input
signal. The first layer contains additional edges to distribute the signal across coarser scales, see
the vertical edges in Figure 1. More precisely, within the first layer, channel c at scale s receives
input from channels c + {−1, 0, 1} at scale s − 1. Similarly, edges within the last layer collect
the signal towards the output. Note that these additional edges do not create any cycles, and that the
edge-structure within the first and last layer is reminiscent of the 2D trellis in Figure 1.
3.2 Stitching convolutional neural networks on the fabric
We now demonstrate how various architectural choices can be ?implemented? in fabrics, demonstrating they subsume an exponentially large class of network architectures. Learning will configure a
fabric to behave as one architecture or another, but more generally as an ensemble of many of them.
For all but the last of the following paragraphs, it is sufficient to consider a 2D trellis, as in Figure 1,
where each node contains the response maps of C channels with dense connectivity among channels.
Re-sampling operators. A variety of re-sampling operators is available in fabrics, here we discuss
ones with small receptive fields, larger ones are obtained by repetition. Stride-two convolutions are
used in fabrics on fine-to-coarse edges, larger strides are obtained by repetition. Average pooling
is obtained in fabrics by striding a uniform filter. Coarse-to-fine edges in fabrics up-sample by
padding zeros around the coarse activations and then applying convolution. For factor-2 bilinear
interpolation we use a filter that has 1 in the center, 1/4 on corners, and 1/2 elsewhere. Nearest
neighbor interpolation is obtained using a filter that is 1 in the four top-left entries and zero elsewhere.
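As a quick check of these constructions, the sketch below (NumPy; it assumes even map sizes and ignores border handling) zero-pads a coarse map and convolves it with the stated 3×3 filter, reproducing factor-2 bilinear interpolation:

```python
import numpy as np

# Factor-2 bilinear filter described above: 1 at the center, 1/4 on corners, 1/2 elsewhere.
kernel = np.array([[0.25, 0.5, 0.25],
                   [0.5,  1.0, 0.5 ],
                   [0.25, 0.5, 0.25]])

coarse = np.arange(9, dtype=float).reshape(3, 3)
fine = np.zeros((6, 6))
fine[::2, ::2] = coarse                # pad zeros around the coarse activations

out = np.zeros_like(fine)
padded = np.pad(fine, 1)
for r in range(6):                     # plain 'same' 3x3 convolution (symmetric kernel)
    for c in range(6):
        out[r, c] = (padded[r:r+3, c:c+3] * kernel).sum()
print(out)  # the coarse map linearly interpolated onto the fine grid (up to borders)
```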
For max-pooling over a 2×2 region, let a and b represent the values of two vertically neighboring
pixels. Use one layer and three channels to compute (a + b)/2, (a − b)/2, and (b − a)/2. After
ReLU, a second layer can compute the sum of the three terms, which equals max(a, b). Each pixel
now contains the maximum of its value and that of its vertical neighbor. Repeating the same in the
horizontal direction, and sub-sampling by a factor two, gives the output of 2×2 max-pooling. The
same process can also be used to show that a network of MaxOut units [4] can be implemented in a
network of ReLU units. Although ReLU and MaxOut are thus equivalent in terms of the functions
they can implement, for training efficiency it may be more advantageous to use MaxOut networks.
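The max-pooling construction is easy to verify numerically. A minimal sketch, assuming the two inputs are non-negative (as outputs of earlier ReLU layers are):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(5), rng.random(5)    # values of two vertically neighboring pixels
relu = lambda x: np.maximum(x, 0)

# layer 1: three channels, each a linear combination of a and b
c1, c2, c3 = (a + b) / 2, (a - b) / 2, (b - a) / 2
# layer 2: the sum of the rectified channels equals the elementwise maximum
assert np.allclose(relu(c1) + relu(c2) + relu(c3), np.maximum(a, b))
```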
Filter sizes. To implement a 5×5 filter we first compute nine intermediate channels to obtain a
vectorized version of the 3×3 neighborhood at each pixel, using filters that contain a single 1 and are
zero elsewhere. A second 3×3 convolution can then aggregate values across the original 5×5 patch,
and output the desired convolution. Any 5×5 filter can be implemented exactly in this way, not only
approximated by factorization, cf. [27]. Repetition allows us to obtain filters of any desired size.
Ordering convolution and re-sampling. As shown in Figure 1, chain-structured networks correspond to paths in our fabrics. If weights on edges outside a path are set to zero, a chain-structured
network with a particular sequencing of convolutions and re-sampling operators is obtained. A trellis
that spans S + 1 scales and L + 1 layers contains more than $\binom{L}{S}$ chain-structured CNNs, since this
corresponds to the number of ways to spread S sub-sampling operators across the L steps to go from
the first to the last layer. More CNNs are embedded, e.g. by exploiting edges within the first and
last layer, or by including intermediate up-sampling operators. Networks beyond chain-structured
ones, see e.g. [3, 20, 26], are also embedded in the trellis, by activating a larger subset of edges than
a single path, e.g. a tree structure for the multi-scale net of [3].
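This count is easy to evaluate. For example, with L = 16 layers and S = 6 sub-sampling steps (numbers chosen here purely for illustration), the trellis already embeds more than eight thousand chain-structured CNNs:

```python
from math import comb

L, S = 16, 6
print(comb(L, S))  # 8008 ways to place the S sub-sampling operators among L steps
```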
Channel connectivity pattern. Although most networks in the literature use dense connectivity
across channels between successive layers, this is not a necessity. Krizhevsky et al . [13], for example,
use a network that is partially split across two independent processing streams.
In Figure 2 we demonstrate that a fabric which is sparsely connected along the channel axis, suffices
to emulate densely connected convolutional layers. This is achieved by copying channels, convolving
them, and then locally aggregating them. Both the copy and sum process are based on local channel
interactions and convolutions with filters that are either entirely zero, or identity filters which are
all zero except for a single 1 in the center. While more efficient constructions exist to represent the
densely connected layer in our trellis, the one presented here is simple to understand and suffices to
demonstrate feasibility. Note that in practice learning automatically configures the trellis.
Both the copy and sum process generally require more than one layer to execute. In the copying process, intermediate ReLUs do not affect the result since the copied values themselves are non-negative
outputs of ReLUs. In the convolve-and-sum process care has to be taken since one convolution might
give negative outputs, even if the sum of convolutions is positive. To handle this correctly, it suffices
to shift the activations by subtracting from the bias of every convolution i the minimum possible
corresponding output a_i^min (which always exists for a bounded input domain). Using the adjusted
bias, the output of the convolution is now guaranteed to be non-negative, and to propagate properly
in the copy and sum process. In the last step of summing the convolved channels, we can add back
Σ_i a_i^min to shift the activations back to recover the desired sum of convolved channels.
Figure 2: Representation of a dense-channel-connect layer in a fabric with sparse channel connections
using copy and swap operations. The five input channels a, . . . , e are first copied; more copies are
generated by repetition. Channels are then convolved and locally aggregated in the last two layers to
compute the desired output. Channels in rows, layers in columns, scales are ignored for simplicity.
Table 1: Analysis of fabrics with L layers, S scales, C channels. Number of activations given for
D = 2 dimensional inputs of size N×N pixels. Channel doubling across scales is used in the bottom row.

# chan. / scale | # resp. maps | # parameters (sparse) | # parameters (dense)      | # activations
constant        | C·L·S        | C·L·3^(D+1)·3·S       | C·L·3^(D+1)·C·S           | C·L·N^2·4/3
doubling        | C·L·2^S      | C·L·3^(D+1)·3·2^S     | C·L·3^(D+1)·C·(7/18)·4^S  | C·L·N^2·2

3.3 Analysis of the number of parameters and activations
For our analysis we ignore border effects, and consider every node to be an internal one. In the top
row of Table 1 we state the total number of response maps throughout the fabric, and the number of
parameters when channels are sparsely or densely connected. We also state the number of activations,
which determines the memory usage of back-propagation during learning.
While embedding an exponential number of architectures in the number of layers L and channels C,
the number of activations and thus the memory cost during learning grows only linearly in C and L.
Since each scale reduces the number of elements by a factor 2^D, the total number of elements across
scales is bounded by 2^D/(2^D − 1) times the number of elements N^D at the input resolution.
The number of parameters is linear in the number of layers L, and number of scales S. For sparsely
connected channels, the number of parameters grows also linearly with the number of channels C ,
while it grows quadratically with C in case of dense connectivity.
As an example, the largest models we trained for 32×32 input have L = 16 layers and C = 256
channels, resulting in 2M parameters (170M for dense), and 6M activations. For 256×256 input we
used up to L = 16 layers and C = 64 channels, resulting in 0.7M parameters (16M for dense), and
89M activations. For reference, the VGG-19 model has 144M parameters and 14M activations.
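These counts follow directly from the Table 1 formulas and are easy to reproduce; a small sketch for constant channels per scale:

```python
import math

def fabric_counts(C, L, N, D=2):
    """Parameter/activation counts from Table 1 (constant channels per scale)."""
    S = 1 + int(math.log2(N))                       # number of scales
    params_sparse = C * L * 3 ** (D + 1) * 3 * S
    params_dense  = C * L * 3 ** (D + 1) * C * S
    activations   = C * L * N ** D * 2 ** D / (2 ** D - 1)
    return params_sparse, params_dense, activations

# The 32x32-input model quoted above: ~2M sparse / ~170M dense parameters, ~6M activations.
print(fabric_counts(C=256, L=16, N=32))
```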
Channel-doubling fabrics. Doubling the number of channels when moving to coarser scales is
used in many well-known architectures, see e.g. [26, 27]. In the second row of Table 1 we analyze
fabrics with channel-doubling instead of a constant number of channels per scale. This results in C·2^S
channels throughout the scale pyramid in each layer, instead of C·S when using a constant number of
channels per scale, where we use C to denote the number of "base channels" at the finest resolution.
For 32×32 input images the total number of channels is roughly 11× larger, while for 256×256
images we get roughly 57× more channels. The last column of Table 1 shows that the number of
activations, however, grows only by 50% due to the coarsening of the maps.

With dense channel connections and 2D data, the amount of computation per node is constant, as at a
coarser resolution there are 4× fewer activations, but interactions among 2×2 more channels. Therefore,
in such fabrics the amount of computation grows linearly in the number of scales as compared to
a single embedded CNN. For sparse channel connections, we adapt the local connectivity pattern
between nodes to accommodate for the varying number channels per scale, see Figure 3 for an
illustration. Each node still connects to nine other nodes at the previous layer: two inputs from scale
s − 1, three from scale s, and four from scale s + 1. The computational cost thus also grows only
Figure 3: Diagram of sparse channel connectivity from
one layer to another in a channel-doubling fabric. Channels are laid out horizontally and scales vertically. Each
internal node, i.e. response map, is connected to nine
nodes at the previous layer: four channels at a coarser
resolution, two at a finer resolution, and to itself and
neighboring channels at the same resolution.
by 50% as compared to using a constant number of channels per scale. In this case, the number of
parameters grows by the same factor 2^S/S as the number of channels. In case of dense connections,
however, the number of parameters explodes with a factor (7/18)·4^S/S. That is, roughly a factor 265 for
32×32 input, and 11,327 for 256×256 input. Therefore, channel-doubling fabrics appear most useful
with sparse channel connectivity. Experiments with channel-doubling fabrics are left for future work.
4 Experimental evaluation results
In this section we first present the datasets used in our experiments, followed by evaluation results.
4.1 Datasets and experimental protocol
Part Labels dataset. This dataset [10] consists of 2,927 face images from the LFW dataset
[8], with pixel-level annotations into the classes hair, skin, and background. We use the standard
evaluation protocol which specifies training, validation and test sets of 1,500, 500 and 927 images,
respectively. We report accuracy at pixel-level and superpixel-level. For superpixel we average the
class probabilities over the contained pixels. We used horizontal flipping for data augmentation.
MNIST. This dataset [16] consists of 28×28 pixel images of the handwritten digits 0, ..., 9. We
use the standard split of the dataset into 50k training samples, 10k validation samples and 10k test
samples. Pixel values are normalized to [0, 1] by dividing them by 255. We augment the train data by
randomly positioning the original image on a 32×32 pixel canvas.
CIFAR10. The CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) consists of 50k 32×32 training images and 10k testing images in 10 classes. We hold out 5k training
images as validation set, and use the remaining 45k as the training set. To augment the data, we
follow common practice, see e.g. [4, 18], and pad the images with zeros to a 40×40 image and then
take a random 32×32 crop; in addition we add horizontally flipped versions of these images.
Training. We train our fabrics using SGD with momentum of 0.9. After each node in the trellis we
apply batch normalization [9], and regularize the model with weight decay of 10^{-4}, but did not apply
dropout [30]. We use the validation set to determine the optimal number of training epochs, and
then train a final model from the train and validation data and report performance on the test set. We
release our Caffe-based implementation at http://thoth.inrialpes.fr/~verbeek/fabrics.
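These optimizer settings map directly onto standard toolkits. A PyTorch-style sketch for reference (the released implementation is Caffe-based; the module and the learning rate below are placeholders, since no learning rate is stated in the text):

```python
import torch

model = torch.nn.Conv2d(3, 8, kernel_size=3)   # stand-in for a fabric; any nn.Module works
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,   # lr assumed for illustration
                            momentum=0.9, weight_decay=1e-4)
```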
4.2 Experimental results
For all three datasets we trained sparse and dense fabrics with various numbers of channels and layers.
In all cases we used a constant number of channels per scale. The results across all these settings can
be found in the supplementary material; here we report only the best results. On all three datasets,
larger trellises perform comparably to or better than smaller ones, so in practice the choice of the
only two hyper-parameters of our model is not critical, as long as a large enough trellis is used.
Part Labels. On this data set we obtained a super-pixel accuracy of 95.6% using both sparse
and dense trellises. In Figure 4 we show two examples of predicted segmentation maps. Table 2
compares our results with the state of the art, both in terms of accuracy and the number of parameters.
Our results are slightly worse than [31, 33], but the latter are based on the VGG-16 network. That
network has roughly 4,000× more parameters than our sparse trellis, and has been trained from
over 1M ImageNet images. We trained our model from scratch using only 2,000 images. Moreover,
[10, 19, 31] also include CRF and/or RBM models to encode spatial shape priors. In contrast, our
results with convolutional neural fabrics (CNF) are obtained by predicting all pixels independently.
Figure 4: Examples form the Part Labels test set: input image (left), ground-truth labels (middle),
and superpixel-level labels from our sparse CNF model with 8 layers and 16 channels (right).
Table 2: Comparison of our results with the state of the art on Part Labels.

Method                            | Year | # Params. | SP Accur. | P Accur.
Tsogkas et al. [31]               | 2015 | >414 M    | 96.97     | --
Zheng et al. [33]                 | 2015 | >138 M    | 96.59     | --
Liu et al. [19]                   | 2015 | >33 M     | --        | 95.24
Kae et al. [10]                   | 2013 | 0.7 M     | 94.95     | --
Ours: CNF-sparse (L = 8, C = 16)  |      | 0.1 M     | 95.58     | 94.60
Ours: CNF-dense (L = 8, C = 64)   |      | 8.0 M     | 95.63     | 94.82
MNIST. We obtain error rates of 0.48% and 0.33% with sparse and dense fabrics respectively. In
Table 3 we compare our results to a selection of recent state-of-the-art work. We excluded several
more accurate results reported in the literature, since they are based on significantly more elaborate
data augmentation methods. Our result with a densely connected fabric is comparable to those of
[32], which use similar data augmentation. Our sparse model, which has 20× fewer parameters than
the dense variant, yields an error of 0.48% which is slightly higher.
CIFAR10. In Table 4 we compare our results to the state of the art. Our error rate of 7.43% with a
dense fabric is comparable to that reported with MaxOut networks [4]. On this dataset the error of
the sparse model, 18.89%, is significantly worse than the dense model. This is either due to a lack of
capacity in the sparse model, or due to difficulties in optimization. The best error of 5.84% [22] was
obtained using residual connections, without residual connections they report an error of 6.06%.
Visualization. In Figure 5 we visualize the connection strengths of learned fabrics with dense
channel connectivity. We observe qualitative differences between learned fabrics. The semantic segmentation model (left) immediately distributes the signal across the scale pyramid (first layer/column),
and then progressively aggregates the multi-scale signal towards the output. In the CIFAR10 classification model the signal is progressively downsampled, exploiting multiple scales in each layer. The
figure shows the result of heuristically pruning (by thresholding) the weakest connections to find a
smaller sub-network with good performance. We pruned 67% of the connections while increasing
the error only from 7.4% to 8.1% after fine-tuning the fabric with the remaining connections. Notice
that all up-sampling connections are deactivated after pruning.
Table 3: Comparison of our results with the state of the art on MNIST. Data augmentation with
translation and flipping is denoted by T and F respectively; N denotes no data augmentation.

Method                               | Year | Augmentation | # Params. | Error (%)
Chang et al. [1]                     | 2015 | N            | 447 K     | 0.24
Lee et al. [17]                      | 2015 | N            | 379 K     | 0.31
Wan et al. (Dropconnect) [32]        | 2013 | T            | --        | 0.32
CKN [21]                             | 2014 | N            | 43 K      | 0.39
Goodfellow et al. (MaxOut) [4]       | 2013 | N            | 420 K     | 0.45
Lin et al. (Network in Network) [18] | 2013 | N            | --        | 0.47
Ours: CNF-sparse (L = 16, C = 32)    |      | T            | 249 K     | 0.48
Ours: CNF-dense (L = 8, C = 64)      |      | T            | 5.3 M     | 0.33
Table 4: Comparison of our results with the state of the art on CIFAR10. Data augmentation with
translation, flipping, scaling and rotation are denoted by T, F, S and R respectively.

Method                                           | Year | Augmentation | # Params. | Error (%)
Mishkin & Matas [22]                             | 2016 | T+F          | 2.5 M     | 5.84
Lee et al. [17]                                  | 2015 | T+F          | 1.8 M     | 6.05
Chang et al. [1]                                 | 2015 | T+F          | 1.6 M     | 6.75
Springenberg et al. (All Convolutional Net) [29] | 2015 | T+F          | 1.3 M     | 7.25
Lin et al. (Network in Network) [18]             | 2013 | T+F          | 1 M       | 8.81
Wan et al. (Dropconnect) [32]                    | 2013 | T+F+S+R      | 19 M      | 9.32
Goodfellow et al. (MaxOut) [4]                   | 2013 | T+F          | >6 M      | 9.38
Ours: CNF-sparse (L = 16, C = 64)                |      | T+F          | 2 M       | 18.89
Ours: CNF-dense (L = 8, C = 128)                 |      | T+F          | 21.2 M    | 7.43
Figure 5: Visualization of mean-squared filter weights in fabrics learned for Part Labels (left) and
CIFAR10 (right, pruned network connections). Layers are laid out horizontally, and scales vertically.
5 Conclusion
We presented convolutional neural fabrics: homogeneous and locally connected trellises over response
maps. Fabrics subsume a large class of convolutional networks. They allow to sidestep the tedious
process of specifying, training, and testing individual network architectures in order to find the best
ones. While fabrics use more parameters, memory and computation than needed for each of the
individual architectures embedded in them, this is far less costly than the resources required to test
all embedded architectures one-by-one. Fabrics have only two main hyper-parameters: the number
of layers and the number of channels. In practice their setting is not critical: we just need a large
enough fabric with enough capacity. We propose variants with dense channel connectivity, and with
channel-doubling over scales. The latter strikes a very attractive capacity/memory trade-off.
In our experiments we study performance of fabrics for image classification on MNIST and CIFAR10,
and of semantic segmentation on Part Labels. We obtain excellent results that are close to the best
reported results in the literature on all three datasets. These results suggest that fabrics are competitive
with the best hand-crafted CNN architectures, be it using a larger number of parameters in some
cases (but much fewer on Part Labels). We expect that results can be further improved by using better
optimization schemes such as Adam [12], using dropout [30] or dropconnect [32] regularization, and
using MaxOut units [4] or residual units [6] to facilitate training of deep fabrics with many channels.
In ongoing work we experiment with channel-doubling fabrics, and fabrics for joint image classification, object detection, and segmentation. We also explore channel connectivity patterns in between
the sparse and dense options used here. Finally, we work on variants that are convolutional along the
scale-axis so as to obtain a scale invariant processing that generalizes better across scales.
Acknowledgment. We would like to thank NVIDIA for the donation of GPUs used in this research.
This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01).
References
[1] J.-R. Chang and Y.-S. Chen. Batch-normalized maxout network in network. Arxiv preprint, 2015.
[2] T. Chen, I. Goodfellow, and J. Shlens. Net2net: Accelerating learning via knowledge transfer. In ICLR,
2016.
[3] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. PAMI,
35(8):1915–1929, 2013.
[4] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML,
2013.
[5] A. Graves, S. Fernández, and J. Schmidhuber. Multi-dimensional recurrent neural networks. In ICANN,
2007.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
[7] S. Honari, J. Yosinski, P. Vincent, and C. Pal. Recombinator networks: Learning coarse-to-fine feature
aggregation. In CVPR, 2016.
[8] G. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: a database for studying
face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts,
Amherst, 2007.
[9] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. In ICML, 2015.
[10] A. Kae, K. Sohn, H. Lee, and E. Learned-Miller. Augmenting CRFs with Boltzmann machine shape priors
for image labeling. In CVPR, 2013.
[11] N. Kalchbrenner, I. Danihelka, and A. Graves. Grid long short-term memory. In ICLR, 2016.
[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[13] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[14] P. Kulkarni, J. Zepeda, F. Jurie, P. Pérez, and L. Chevallier. Learning the structure of deep architectures
using `1 regularization. In BMVC, 2015.
[15] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Handwritten digit
recognition with a back-propagation network. In NIPS, 1989.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, pages 2278–2324, 1998.
[17] C.-Y. Lee, P. Gallagher, and Z. Tu. Generalizing pooling functions in convolutional neural networks:
Mixed, gated, and tree. In AISTATS, 2016.
[18] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014.
[19] S. Liu, J. Yang, C. Huang, , and M.-H. Yang. Multi-objective convolutional learning for face labeling. In
CVPR, 2015.
[20] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR,
2015.
[21] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In NIPS, 2014.
[22] D. Mishkin and J. Matas. All you need is a good init. In ICLR, 2016.
[23] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stich networks for multi-task learning. In CVPR,
2016.
[24] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, 2015.
[25] T. Pfister, J. Charles, and A. Zisserman. Flowing ConvNets for human pose estimation in videos. In CVPR,
2015.
[26] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation.
In Medical Image Computing and Computer-Assisted Intervention, 2015.
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2015.
[28] S. Singh, D. Hoiem, and D. Forsyth. Swapout: learning an ensemble of deep architectures. In NIPS, 2016.
[29] J. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR, 2015.
[30] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. JMLR, 2014.
[31] S. Tsogkas, I. Kokkinos, G. Papandreou, and A. Vedaldi. Deep learning for semantic part segmentation
with high-level guidance. Arxiv preprint, 2015.
[32] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using DropConnect. In ICML, 2013.
[33] H. Zheng, Y. Liu, M. Ji, F. Wu, and L. Fang. Learning high-level prior with convolutional neural networks
for semantic segmentation. Arxiv preprint, 2015.
[34] Y. Zhou, X. Hu, and B. Zhang. Interlinked convolutional neural networks for face parsing. In International
Symposium on Neural Networks, 2015.
5,864 | 6,305 | Linear Feature Encoding for Reinforcement Learning
Zhao Song, Ronald Parr†, Xuejun Liao, Lawrence Carin
Department of Electrical and Computer Engineering
†Department of Computer Science
Duke University, Durham, NC 27708, USA
Abstract
Feature construction is of vital importance in reinforcement learning, as the quality
of a value function or policy is largely determined by the corresponding features.
The recent successes of deep reinforcement learning (RL) only increase the importance of understanding feature construction. Typical deep RL approaches use
a linear output layer, which means that deep RL can be interpreted as a feature
construction/encoding network followed by linear value function approximation.
This paper develops and evaluates a theory of linear feature encoding. We extend
theoretical results on feature quality for linear value function approximation from
the uncontrolled case to the controlled case. We then develop a supervised linear
feature encoding method that is motivated by insights from linear value function
approximation theory, as well as empirical successes from deep RL. The resulting
encoder is a surprisingly effective method for linear value function approximation
using raw images as inputs.
1 Introduction
Feature construction has been and remains an important topic for reinforcement learning. One of
the earliest, high profile successes of reinforcement learning, TD-gammon [1], demonstrated a huge
performance improvement when expert features were used instead of the raw state, and recent years
have seen a great deal of practical and theoretical work on understanding feature selection and
generation for linear value function approximation [2?5].
More recent practical advances in deep reinforcement learning have initiated a new wave of interest in
the combination of neural networks and reinforcement learning. For example, Mnih et al. [6] described
a reinforcement learning (RL) system, referred to as Deep Q-Networks (DQN), which learned to
play a large number of Atari video games as well as a good human player. Despite these successes
and, arguably because of them, a great deal of work remains to be done in understanding the role of
features in RL. It is common in deep RL methods to have a linear output layer. This means that there
is potential to apply the insights gained from years of work in linear value function approximation to
these networks, potentially giving insight to practitioners and improving the interpretability of the
results. For example, the layers preceding the output layer could be interpreted as feature extractors
or encoders for linear value function approximation.
As an example of the connection between practical neural network techniques and linear value
function approximation theory, we note that Oh et al. [7] introduced spatio-temporal prediction
architectures that trained an action-conditional encoder to predict next states, leading to improved
performance on Atari games. Oh et al. cited examples of next state prediction as a technique used in
neural networks in prior work dating back several decades, though this approach is also suggested by
more recent linear value function approximation theory [4].
In an effort to extend previous theory in a direction that would be more useful for linear value function
approximation and, hopefully, lead to greater insights into deep RL, we generalize previous work
on analyzing features for uncontrolled linear value function approximation [4] to the controlled
case. We then build on this result to provide a set of sufficient conditions which guarantee that
encoded features will result in good value function approximation. Although inspired by deep RL,
our results (aside from one negative one in Section 3.2 ) apply most directly to the linear case, which
has been empirically explored in Liang et al. [8]. This implies the use of a rich, original (raw) feature
space, such as sequences of images from a video game without persistent, hidden state. The role of
feature encoding in such cases is to find a lower dimensional representation that is suitable for linear
value function approximation. Feature encoding is still needed in such cases because the raw state
representation is so large that it is impractical to use directly.
Our approach works by defining an encoder and a decoder that use a lower dimensional representation
to encode and predict both reward and next state. Our results differ from previous results [4] in linear
value function approximation theory that provided sufficient conditions for good approximation.
Specifically, our results span two different representations, a large, raw state representation and
a reduced one. We propose an efficient coordinate descent algorithm to learn parameters for the
encoder and decoder. To demonstrate the effectiveness of this approach, we consider the challenging
(for linear techniques) problem of learning features from raw images in pendulum balancing and
blackjack. Surprisingly, we are able to discover good features and learn value functions in these
domains using just linear encoding and linear value function approximation.
2 Framework and Notation
Markov Decision Processes (MDPs) can be represented as a tuple ⟨S, A, R, P, γ⟩, where S =
{s_1, s_2, ..., s_n} is the state set, A = {a_1, a_2, ..., a_m} is the action set, R ∈ R^{nm×1} represents the
reward function whose element R(s_i, a_j) denotes the expected immediate reward when taking action
a_j in state s_i, P ∈ R^{nm×n} denotes the transition probabilities of underlying states whose element
P((s_i, a), s_j) is the probability of transitioning from state s_i to state s_j when taking an action a, and
γ ∈ [0, 1) is the discount factor for the future reward. The policy π in an MDP can be represented in
terms of the probability of taking action a when in state s, i.e., π(a|s) ∈ [0, 1] and Σ_a π(a|s) = 1.
Given a policy π, we define P^π ∈ R^{nm×nm} as the transition probability matrix over the state-action pairs,
where P^π(s', a' | s, a) = P((s, a), s') π(a' | s'). For any policy π, its Q-function is defined over the
state-action pairs, where Q^π(s, a) represents the expected total γ-discounted reward when taking
action a in state s and following π afterwards. For the state-action pair (s, a), the Q-function
satisfies the following Bellman equation:

$$Q^\pi(s, a) = R(s, a) + \gamma \sum_{s', a'} P^\pi(s', a' \mid s, a)\, Q^\pi(s', a') \qquad (1)$$
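For a finite MDP, equation (1) is a linear system in Q^π and can be solved exactly, which is useful later for checking approximations. A small NumPy sketch with made-up numbers:

```python
import numpy as np

# Toy MDP over nm = 4 state-action pairs, as in Eq. (1).
gamma = 0.9
R = np.array([1.0, 0.0, 0.5, 0.2])          # expected immediate rewards
P_pi = np.array([[0.0, 0.5, 0.5, 0.0],      # P^pi(s', a' | s, a); rows sum to 1
                 [0.2, 0.3, 0.5, 0.0],
                 [0.0, 0.0, 0.5, 0.5],
                 [1.0, 0.0, 0.0, 0.0]])

# Q^pi is the fixed point of Eq. (1): Q = R + gamma * P_pi @ Q
Q = np.linalg.solve(np.eye(4) - gamma * P_pi, R)
assert np.allclose(Q, R + gamma * P_pi @ Q)
```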
2.1 The Bellman operator
We define the Bellman operator T^π on Q-functions as

$$(T^\pi Q)(s, a) = R(s, a) + \gamma \sum_{s', a'} P^\pi(s', a' \mid s, a)\, Q(s', a').$$

Q^π is known to be a fixed point of T^π, i.e., T^π Q^π = Q^π. Of particular interest in this paper is the
Bellman error of an approximate Q-function Q̂^π, specifically BE(Q̂^π) = T^π Q̂^π − Q̂^π. When
the Bellman error is 0, the Q-function is at the fixed point. Otherwise, we have [9]:

$$\| \hat{Q}^\pi - Q^\pi \|_\infty \le \| \hat{Q}^\pi - T^\pi \hat{Q}^\pi \|_\infty \,/\, (1 - \gamma),$$

where ‖x‖_∞ refers to the ℓ∞ norm of a vector x.
2.2 Linear Approximation
When the Q-function cannot be represented exactly, we can approximate it with a linear function
as Q̂^π(s, a) = Φ w_Φ^π, with Φ = [φ(s_1, a_1) ... φ(s_n, a_m)]^T ∈ R^{nm×km}, where φ(s_i, a_j) ∈ R^{km×1} is a
feature vector for state s_i and action a_j, superscript T represents matrix transpose, and w_Φ^π ∈ R^{km×1}
is the weight vector.
Given the features Φ, the linear fixed-point methods [10–12] aim to estimate w_Φ^π by solving the
following fixed-point equation:

$$\Phi w_\Phi^\pi = \Pi \big( R + \gamma P^\pi \Phi w_\Phi^\pi \big) \qquad (2)$$

where Π = Φ(Φ^T Φ)^{-1} Φ^T is the orthogonal ℓ2 projector on span(Φ). Solving (2) leads to the
following linear fixed-point solution:

$$w_\Phi^\pi = \big(\Phi^T \Phi - \gamma \Phi^T P^\pi \Phi\big)^{-1} \Phi^T R.$$
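This solution is straightforward to compute; a NumPy sketch with randomly generated Φ, R and P^π, verifying that the resulting w satisfies the projected fixed-point equation (2):

```python
import numpy as np

rng = np.random.default_rng(1)
nm, k, gamma = 6, 2, 0.9
Phi = rng.random((nm, k))                       # features over state-action pairs
R = rng.random(nm)
P_pi = rng.random((nm, nm)); P_pi /= P_pi.sum(1, keepdims=True)

w = np.linalg.solve(Phi.T @ Phi - gamma * Phi.T @ P_pi @ Phi, Phi.T @ R)

# w solves Eq. (2): Phi w = Pi (R + gamma P^pi Phi w)
Pi = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)  # orthogonal projector onto span(Phi)
assert np.allclose(Phi @ w, Pi @ (R + gamma * P_pi @ Phi @ w))
```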
2.3 Feature Selection/Construction
There has been great interest in recent years in automating feature selection or construction for
reinforcement learning. Research in this area has typically focused on using a linear value function
approximation method with a feature selection wrapper.
Parr et al. [2] proposed using the Bellman error to generate new features, but this approach did not
scale well in practice. Mahadevan and Maggioni [3] explored a feature generation approach based
upon the Laplacian of a connectivity graph of the MDP. This approach has many desirable features,
though it did not connect directly to the optimization problem implied by the MDP and could produce
worthless features in pathological cases [4].
Geramifard et al. [13] and Farahmand and Precup [14] consider feature construction where features
are built up through composition of base or atomic features. Such approaches are reminiscent of
classical approaches to features construction. They can be useful, but they can also be myopic if the
needed features are not reachable through chains of simpler features where each step along the chain
is a demonstrable improvement.
Feature selection solves a somewhat different problem from feature construction. Feature selection
assumes that a reasonable set of candidate features is presented to the learner, and the learner's task is
to find the good ones from a potentially large set of mostly worthless or redundant ones. LASSO [15]
and Orthogonal Matching Pursuit (OMP) [16] are methods of feature selection for regression that
have been applied to reinforcement learning [17, 5, 18, 19]. In practice, these approaches do require
that good features are present within the larger set, so they do not address the question of how to
generate good features in the first place.
3 Theory for Feature Encoding
Previous work demonstrated an equivalence between linear value function approximation and linear
model approximation [20, 21, 4], as well as the relationship between errors in the linear model and
the Bellman error for the linear fixed point [4]. Specifically, low error in the linear model could imply
low Bellman error in the linear fixed point approximation. These results were for the uncontrolled
case. A natural extension of these results would be to construct features for action-conditional linear
models, one for each action, and use those features across multiple policies, i.e., through several
iterations of policy iteration. Anecdotally, this approach seemed to work well in some cases, but there
were no theoretical results to justify it. The following example demonstrates that features which are
sufficient for perfect linear action models and reward models, may not suffice for perfect linear value
function approximation.
Example 1. Consider an MDP with a single feature φ(x) = x, two actions that have no effect,
p(x|x, a_1) = 1.0 and p(x|x, a_2) = 1.0, and with R(x, a_1) = x and R(x, a_2) = −x. The single
feature φ is sufficient to construct a linear predictor of the expected next state and reward. However,
the value function is not linear in φ since V^*(x) = |x| / (1 − γ).
The significance of this example is that existing theory on the connection between linear models
and linear features does not provide sufficient conditions on the quality of the features for model
approximation that would ensure good value function approximation for all policies. Existing theory
also does not extend to provide a connection between the model error for a set of features and
the Bellman error of a Q-function based on these features. To make this connection, the features
must be thought of as predicting not only expected next features, but expected next feature-action
combinations. Below, we extend the results of Parr et al. [4] to Q-functions and linear state-action
models.
The linear model. Similar to Parr et al. [4], we approximate the reward R and the expected policy-conditional next feature P^π Φ in the controlled case, using the following linear model:

$$\hat{R} = \Phi r_\Phi = \Phi(\Phi^T \Phi)^{-1} \Phi^T R \qquad (3a)$$
$$\widehat{P^\pi \Phi} = \Phi P_\Phi^\pi = \Phi(\Phi^T \Phi)^{-1} \Phi^T P^\pi \Phi. \qquad (3b)$$

Since Q̂^π = Φw for some w, the fixed-point equation in (1) becomes

$$\Phi w = \Phi r_\Phi + \gamma \Phi P_\Phi^\pi w \qquad (4a)$$
$$w = (I - \gamma P_\Phi^\pi)^{-1} r_\Phi \qquad (4b)$$

Lemma 2. For any MDP M with features Φ and policy π represented as the fixed point of the
approximate Q-function, the linear-model solution and the linear fixed-point solution are the same.

Proof: See Supplemental Materials.
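Lemma 2 can be checked numerically: building the compressed reward and model of (3a)-(3b) and solving (4b) recovers the same weights as the linear fixed-point solution of Section 2.2. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
nm, k, gamma = 6, 2, 0.9
Phi = rng.random((nm, k))
R = rng.random(nm)
P_pi = rng.random((nm, nm)); P_pi /= P_pi.sum(1, keepdims=True)

G = np.linalg.solve(Phi.T @ Phi, Phi.T)         # (Phi^T Phi)^(-1) Phi^T
r_phi = G @ R                                   # compressed reward, Eq. (3a)
P_phi = G @ P_pi @ Phi                          # compressed model, Eq. (3b)

w_model = np.linalg.solve(np.eye(k) - gamma * P_phi, r_phi)               # Eq. (4b)
w_fixed = np.linalg.solve(Phi.T @ Phi - gamma * Phi.T @ P_pi @ Phi, Phi.T @ R)
assert np.allclose(w_model, w_fixed)            # Lemma 2
```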
To analyze the error in the controlled case, we define the Bellman error for the state-action value function, given a policy $\pi$, as
$$BE\big(\hat{Q}^\pi\big)(s,a) = R(s,a) + \gamma\sum_{s',a'} P^\pi(s',a'\,|\,s,a)\,\hat{Q}^\pi(s',a') - \hat{Q}^\pi(s,a).$$
As a counterpart to Parr et al. [4], we introduce the following reward error and policy-conditional per-feature error in the controlled case:
$$\Delta_R = R - \hat{R} = R - \Phi r_\Phi, \qquad (5a)$$
$$\Delta_{\Phi^\pi} = P^\pi\Phi - \widehat{P^\pi\Phi} = P^\pi\Phi - \Phi P_\Phi^\pi. \qquad (5b)$$
Theorem 3. For any MDP $M$ with feature $\Phi$, and policy $\pi$ represented as the fixed point of the approximate Q-function, the Bellman error can be represented as
$$BE\big(\hat{Q}^\pi\big) = \Delta_R + \gamma\,\Delta_{\Phi^\pi}\, w^\pi.$$
Proof: See Supplemental Materials.
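The identity in Theorem 3 is easy to sanity-check numerically. The sketch below builds a random synthetic chain (all matrices are made-up test data, not taken from the paper's domains) and verifies that the directly computed Bellman error matches $\Delta_R + \gamma\Delta_{\Phi^\pi} w^\pi$.

```python
# Sketch: numerically check Theorem 3 on a random synthetic chain.
import numpy as np

rng = np.random.default_rng(0)
n_sa, k, gamma = 30, 4, 0.9             # state-action pairs, features, discount

Phi = rng.normal(size=(n_sa, k))                                      # features
P = rng.random(size=(n_sa, n_sa)); P /= P.sum(axis=1, keepdims=True)  # P^pi
R = rng.normal(size=n_sa)

r_Phi = np.linalg.solve(Phi.T @ Phi, Phi.T @ R)                   # (3a)
P_Phi = np.linalg.solve(Phi.T @ Phi, Phi.T @ (P @ Phi))           # (3b)
w = np.linalg.solve(np.eye(k) - gamma * P_Phi, r_Phi)             # (4b)

Q = Phi @ w
be = R + gamma * P @ Q - Q                        # Bellman error, direct
Delta_R = R - Phi @ r_Phi                         # (5a)
Delta_Phi = P @ Phi - Phi @ P_Phi                 # (5b)
print(np.allclose(be, Delta_R + gamma * Delta_Phi @ w))   # True: Theorem 3
```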
Theorem 3 suggests a sufficient condition for a good set of features: if the model prediction error $\Delta_{\Phi^\pi}$ and the reward prediction error $\Delta_R$ are low, then the Bellman error must also be low. Previous work did not give an in-depth understanding of how to construct such features. In Parr et al. [2], the Bellman error is defined only on the training data. Since it is orthogonal to the span of the existing features, there is no convenient way to approximate it, and the extension to off-sample states is not obvious. They used locally weighted regression with limited success, but the process was slow and prone to the usual perils of non-parametric approximators, such as high sensitivity to the distance function used. One might hope to minimize (5a) and (5b) directly, perhaps using sampled states and next states, but this is not a straightforward optimization problem to solve in general, because the search space for $\Phi$ is a space of functions and because $\Phi$ appears inconveniently on both sides of (5b), making it difficult to rearrange terms to solve for $\Phi$ as an optimization problem with a fixed target. Thus, without
additional assumptions about how the states are initially encoded and what space of features will be
searched, it is challenging to apply Theorem 3 directly. Our solution to this difficulty is to apply the
theorem in a somewhat indirect manner: First we assume that the input is a rich, raw feature set (e.g.,
images) and that the primary challenge is reducing the size of the feature set rather than constructing
more elaborate features. Next, we restrict our search space for ? to the space of linear encodings of
these raw features. Finally, we require that these encoded features are predictive of next raw features
rather than next encoded features. This approach differs from what Theorem 3 requires but it results
in an easier optimization problem and, as shown below, we are able to use Theorem 3 to show that
this alternative condition is sufficient to represent the true value function.
We now present a theory of predictively optimal feature encoding. We refer to the features that are ultimately used by the linear value function approximation step using the familiar $\Phi$ notation, and we refer to the inputs before feature encoding as the raw features, $A$. For $n$ samples, $l$ raw features, and $m$ actions, we can think of $A$ as an $nm \times lm$ matrix. For every row in $A$, only the block corresponding to the action taken is non-zero. The raw features are operated on by an encoder:
Definition 4. The encoder, $E_\phi$ (or $E_\Phi$ in the linear case), is a transformation $E_\phi(A) = \Phi$. We use the notation $E_\phi$ because we think of it as encoding the raw state. When the encoder is linear, $E_\phi = E_\Phi$, where $E_\Phi$ is an $lm \times km$ matrix that right-multiplies $A$: $A E_\Phi = \Phi$.
We want to encode a reduced size representation of the raw features sufficient to predict the next
expected reward and raw features because, as proven below, doing so is a sufficient (though not
necessary) condition for good linear value function approximation. Prediction of next raw feature
and rewards is done via a decoder, which is a matrix in this paper, but could be non-linear in general:
Definition 5. The decoder, $D$, is a $km \times (lm+1)$ matrix predicting $[P^\pi A, R]$ from $E_\phi(A)$.
This approach is distinct from the work of Parr et al. [4] for several reasons. We study a set of
conditions on a reduced size feature set and study the relationship between the reduced feature set and
the original features, and we provide an algorithm in the next section for constructing these features.
Definition 6. $\Phi = E_\phi(A)$ is predictively optimal with respect to $A$ and $\pi$ if there exists a $D_\phi$ such that $E_\phi(A)\,D_\phi = [P^\pi A, R]$.
3.1 Linear Encoder and Linear Decoder
In the linear case, a predictively optimal set of features satisfies:
$$A E_\Phi D_\Phi = A E_\Phi [D_\Phi^s, D_\Phi^r] = [P^\pi A, R], \qquad (6)$$
where $D_\Phi^s$ and $D_\Phi^r$ represent the first $lm$ columns and the last column of $D_\Phi$, respectively.
Theorem 7. For any MDP $M$ with predictively optimal $\Phi = A E_\Phi$ for policy $\pi$, if the linear fixed point for $\Phi$ is $\hat{Q}^\pi$, then $BE(\hat{Q}^\pi) = 0$.
Proof: See Supplemental Materials.
3.2 Non-linear Encoder and Linear Decoder
One might expect that the results above generalize easily to the case where a more powerful encoder
is used. This could correspond, for example, to a deep network with a linear output layer used for
value function approximation. Surprisingly, the generalization is not straightforward:
Theorem 8. The existence of a non-linear encoder $E$ and linear decoder $D$ such that $E(A)D = [P^\pi A, R]$ is not sufficient to ensure predictive optimality of $\Phi = E(A)$.
Proof: See Supplemental Materials.
This negative result doesn't shut the door on combining non-linear encoders with linear decoders. Rather, it indicates that additional conditions beyond those needed in the linear case are required to ensure optimal encoding. For example, requiring that the encoded features lie in an invariant subspace of $P^\pi$ [4] would be a sufficient condition (though of questionable practicality).
4 Iterative Learning of Policy and Encoder
In practice we do not have access to $P^\pi$, but we do have access to the raw feature representation of sampled states and sampled next states. To train the encoder $E_\Phi$ and decoder $D_\Phi$, we sample states and next states from a data collection policy. When exploration is not the key challenge, this can be done with a single data collection run using a policy that randomly selects actions (as is often done with LSPI [22]). For larger problems, it may be desirable to collect additional samples as the policy changes. These sampled states and next states are represented by matrices $\bar{A}$ and $A'$, respectively.
Theorem 7 suggests that given a policy $\pi$, zero Bellman error can be achieved if features are encoded appropriately. Subsequently, the obtained features and resulting Q-functions can be used to update the policy, with an algorithm such as LSPI. In a manner similar to the policy update in LSPI, the non-zero blocks in $A'$ are changed accordingly after a new policy is learned. With the updated $A'$, we re-learn the encoder and then repeat the process, as summarized in Algorithm 1. It may be desirable to update the policy several times while estimating $\hat{Q}^\pi$, since the encoded features may still be useful if the policy has not changed much. Termination conditions for this algorithm are typical approximate policy iteration termination conditions.
4.1 Learning Algorithm for Encoder
In our implementation, the encoder $E_\Phi$ and decoder $D_\Phi$ are jointly learned using Algorithm 2, which seeks to minimize $\|\bar{A} E_\Phi D_\Phi - [A', R]\|_F$ by coordinate descent [23], where $\|X\|_F$ represents the Frobenius norm of a matrix $X$. Note that $\bar{A}$ can be constructed as a block diagonal matrix, where each block corresponds to the samples from each action. Subsequently, the pseudoinverse of $\bar{A}$ in Algorithm 2 can be efficiently computed by operating on the pseudoinverse of each block in $\bar{A}$.

Algorithm 1 Iterative Learning of Encoder and Policy
  while Termination Conditions Not Satisfied do
    Learn the encoder $E_\Phi$ and decoder $D_\Phi$
    Estimate $\hat{Q}^\pi$
    Update the next raw state $A'$, by changing the position of non-zero blocks according to the greedy policy for $\hat{Q}^\pi$
  end while
Algorithm 2 alternately updates $E_\Phi$ and $D_\Phi$ until one of the following conditions is met: (1) the number of iterations reaches the maximum allowed; (2) the relative residual $\|\bar{A} E_\Phi D_\Phi - [A', R]\|_F \,/\, \|[A', R]\|_F$ is below a threshold; (3) the current residual is greater than the previous residual. For regularization, we use the truncated singular value decomposition (SVD) [24] when taking the pseudoinverses of $\bar{A}$, to discard all but the top $k$ singular vectors in each block of $\bar{A}$.
Algorithm 2 is based on a linear encoder and a linear decoder. Consequently, one may notice that the value function is also linear in the domain of the raw features, i.e., the value function can be represented as $\hat{Q}^\pi = \bar{A} E_\Phi w = \bar{A} w'$ with $w' = E_\Phi w$. One may wonder why it is not better to solve for $w'$ directly, with regularization on $w'$. Although it is impractical to do this using batch linear value function approximation methods, due to the size of the feature matrix, one might argue that on-line approaches such as deep RL techniques approximate this approach by stochastic gradient descent. To the extent this characterization is accurate, it only increases the importance of having a clear understanding of feature encoding as an important sub-problem, since this is the natural interpretation of everything up to the final layer in such networks and is even an explicit objective in some cases [7].

Algorithm 2 Linear Feature Discovery
  LinearEncoderFeatures($\bar{A}$, $A'$, $R$, $k$)
    $D_\Phi \leftarrow \mathrm{rand}(km,\, lm+1)$
    while Convergence Conditions Not Satisfied do
      $E_\Phi \leftarrow \bar{A}^\dagger\, [A', R]\, D_\Phi^\dagger$
      $D_\Phi \leftarrow (\bar{A} E_\Phi)^\dagger\, [A', R]$
    end while
    return $E_\Phi$
  See text for termination conditions. rand represents samples from uniform $[0,1]$. $\dagger$ is the (truncated) Moore-Penrose pseudoinverse.
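The following is a minimal Python sketch of Algorithm 2 on synthetic data; the truncation level, iteration cap, tolerance, and toy matrices are illustrative assumptions rather than the settings used in the experiments below.

```python
# Sketch of Algorithm 2 (linear feature discovery) on synthetic data.
import numpy as np

def trunc_pinv(M, top):
    """Moore-Penrose pseudoinverse keeping only the top singular vectors."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < top, 1.0 / np.maximum(s, 1e-12), 0.0)
    return (Vt.T * s_inv) @ U.T

def linear_encoder_features(A_bar, A_next, R, k, svd_k=50, iters=50, tol=1e-6):
    n, lm = A_bar.shape
    target = np.hstack([A_next, R.reshape(-1, 1)])      # [A', R]
    D = np.random.default_rng(0).random((k, lm + 1))    # D ~ uniform[0, 1]
    A_pinv = trunc_pinv(A_bar, svd_k)                   # regularized A^+
    prev = np.inf
    for _ in range(iters):
        E = A_pinv @ target @ np.linalg.pinv(D)         # E <- A^+ [A', R] D^+
        D = np.linalg.pinv(A_bar @ E) @ target          # D <- (A E)^+ [A', R]
        res = np.linalg.norm(A_bar @ E @ D - target) / np.linalg.norm(target)
        if res < tol or res > prev:                     # termination tests (2), (3)
            break
        prev = res
    return E

# toy usage on random raw features
rng = np.random.default_rng(1)
A_bar = rng.normal(size=(200, 60))
A_next = (A_bar @ rng.normal(size=(60, 60))) * 0.1
R = rng.normal(size=200)
E = linear_encoder_features(A_bar, A_next, R, k=8)
print(E.shape)   # (60, 8)
```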
5 Experiments
The goal of our experiments is to show that the model of and algorithms for feature encoding presented
above are practical and effective. The use of our encoder allows us to learn good policies using linear
value function approximation on raw images, something that is not generally perceived to be easy
to do. These experiments should be viewed as validating this approach to feature encoding, but not
competing with deep RL methods, which are non-linear and use far greater computational resources.
We implement our proposed linear encoder-decoder model and, for comparison, the random projection
model in Ghavamzadeh et al. [25]. We tested them on the Inverted Pendulum and Blackjack [26],
two popular benchmark domains in RL. Our test framework creates raw features using images, where the elements in the non-zero block of $\bar{A}$ correspond to an image that has been converted to a vector by concatenating the rows of the image. For each problem, we run Algorithm 1 50 times independently to
account for the randomness in the training data. Our training data are formed by running a simulation
for the desired number of steps and choosing actions at random. For the encoder, the number of
features k is selected over the validation set to achieve the best performance. All code is written in
MATLAB and tested on a machine with 3.1GHz CPU and 8GB RAM. Our test results show that
Algorithm 1 cost at most half an hour to run, for the inverted pendulum and blackjack problems.
?
To verify that the encoder is doing something interesting, rather than simply picking features from $\bar{A}$, we also tried a greedy, sparse reinforcement learning algorithm, OMP-TD [5], using $\bar{A}$ as the candidate feature set. Our results, however, showed that OMP-TD's performance was much worse than the approach using the linear encoder. We omit further details on OMP-TD's performance for conciseness.
5.1 Inverted Pendulum
We used a version of the inverted pendulum adapted from Lagoudakis and Parr [22], a continuous
control problem with 3 discrete actions, left, right, or nothing, corresponding to the force applied to
a cart on an infinite rail upon which an inverted pendulum is mounted. The true state is described
by two continuous variables, the angle and angular velocity of the pendulum. For the version of the
problem used here, there is a reward of 0 for each time step the pendulum is balanced, and a penalty
of -1 for allowing the pendulum to fall, after which the system enters an absorbing state with value 0. The discount factor is set to 0.95.
For the training data, we collected a desired number of trajectories with starting angle and angular
velocity sampled uniformly on $[-0.2, 0.2]$. These trajectories were truncated after 100 steps if the
pendulum had not already fallen. Algorithm 2 did not see the angle or angular velocity. Instead,
the algorithm was given two successive, rendered, grayscale images of the pendulum. Each image
has $35 \times 52$ pixels and hence the raw state is a $35 \times 52 \times 2 = 3640$ dimensional vector. To ensure
that these two images are a Markovian representation of the state, it was necessary to modify the
simulator. The original simulator integrated the effects of gravity and the force applied over the time
step of the simulator. This made the simulation more accurate, but has the consequence that the
change in angle between two successive time steps could differ from the angular velocity. We forced
the angular velocity to match the change in angle per time step, thereby making the two successive
images a Markovian state.
We compare the linear encoder with the features using radial basis functions (RBFs) in Lagoudakis
and Parr [22], and the random projection in Ghavamzadeh et al. [25]. The learned policy was then
evaluated 100 times to obtain the average number of balancing steps. For each episode, a maximum
of 3000 steps is allowed to run. If a run achieves this maximum number, we claim it as a success and
count it when computing the probability of success. We used k = 50 features for both linear encoder
and random projection.
[Figure 1: (a) Number of balancing steps and (b) probability of success, vs. number of training episodes. Curves: Encoder-2, Encoder-1, RBF, Random Projection; means with 95% confidence intervals.]

Figure 1 shows the results with means and 95% confidence intervals, given different numbers of training episodes, where Encoder-$i$ corresponds to the version of Algorithm 1 with $i$ changes in the encoder. We observe that for most of the points, our proposed encoder achieves better performance than RBFs and random projections, in terms of both balancing steps and the probability of success. This is a remarkable result because the RBFs had access to the underlying state, while the encoder was forced to discover an underlying state representation based upon the images. Moreover, Encoder-2 achieves slightly better performance than Encoder-1 in most of the testing points. We also notice that further increasing $i$ did not bring any obvious improvement, based on our test.

5.2 Blackjack
There are 203 states in this problem, so we can solve directly for the optimal value $V^*$ and the optimal policy $\pi^*$ explicitly. The states 1-200 in this problem can be completely described by the ace status (usable or not), the player's current sum (12-21), and the dealer's one showing card (A-10). The terminal states 201-203 correspond to win, lose, and draw, respectively. We set $k = 203$ features for the linear encoder.
To represent raw states for the encoder, we use three concatenated sampled MNIST digits, and hence a raw state is a $28 \times 28 \times 3 = 2352$ dimensional vector. Two examples of such raw states are shown in Figure 2. Note that the three terminal states are represented by "300", "400", and "500", respectively.
The training data are formed by executing the random policy for the desired number of episodes. Our evaluation metrics for a policy represented by the value $V$ and the corresponding action $a$ are
$$\text{Relative value error} = \|V - V^*\|_2 / \|V^*\|_2, \qquad \text{Action error} = \|a - a^*\|_0.$$
We compare the features discovered by the linear encoder and random projection against indicator functions on the true state, since such indicator features should be the gold standard. We can make the encoder and random projection's tasks more challenging by adding noise to the raw state. Although it is not guaranteed in general (Example 1), it suffices to learn a single encoder that persisted across policies for this problem, so we report results for a single set of encoded features. We denote the algorithms using the linear encoder as Encoder-Image-$\omega$ and the algorithms using random projection as Random-Image-$\omega$, where $\omega$ is the number of possible images used for each digit. For example, $\omega = 10$ means that the image for each digit is randomly selected from the first 10 images in the MNIST training dataset.

[Figure 2: Two examples of the blackjack state rendered as three MNIST digits.]
[Figure 3: (a) Relative value error and (b) action error, as functions of the number of training episodes. Curves: Random-Image-10, Encoder-Image-10, Random-Image-1, Encoder-Image-1, Indicator. An additional plot for the actual return is provided in Supplemental Materials.]

Figure 3 shows the surprising result that Encoder-Image-1 and Random-Image-1 achieve superior performance to indicator functions on the true state when the number of training episodes is less than or equal to 6000. In this case, the encoded state representation wound up having less than 203 effective parameters, because the SVD in the pseudoinverse found lower dimensional structure that explained most of the variation and discarded the rest as noise, since the singular values were below threshold. This put the encoder on the favorable side of the bias-variance trade-off when training data were scarce. When the number of training episodes becomes larger, the indicator function outperforms the linear encoder, which is consistent with its asymptotically optimal property. Furthermore, the performance of the encoder becomes worse as $\omega$ is larger. This matches our expectation that a larger $\omega$ means that a state would be mapped to more possible digits and thus extracting features for the same state becomes more difficult. Finally, we notice that our proposed encoder is more robust to noise when compared with random projection: Encoder-Image-10 outperforms Random-Image-10 with remarkable margins, measured in both relative value error and action error.
6 Conclusions and Future Work
We provide a theory of feature encoding for reinforcement learning that provides guidance on how
to reduce a rich, raw state to a lower-dimensional representation suitable for linear value function
approximation. Our results are most compelling in the linear case, where we provide a framework
and algorithm that enables linear value function approximation using a linear encoding of raw images.
Although our framework aligns with practice for deep learning [7], our results indicate that future work is needed to elucidate the additional conditions required to extend the theory and guarantee good performance in the non-linear case.
Acknowledgements
We thank the anonymous reviewers for their helpful comments and suggestions. This research was
supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.
References
[1] G. Tesauro, "TD-Gammon, a self-teaching backgammon program, achieves master-level play," Neural Computation, 1994.
[2] R. Parr, C. Painter-Wakefield, L. Li, and M. Littman, "Analyzing feature generation for value-function approximation," in ICML, 2007.
[3] S. Mahadevan and M. Maggioni, "Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes," JMLR, 2007.
[4] R. Parr, L. Li, G. Taylor, C. Painter-Wakefield, and M. L. Littman, "An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning," in ICML, 2008.
[5] C. Painter-Wakefield and R. Parr, "Greedy algorithms for sparse reinforcement learning," in ICML, 2012.
[6] V. Mnih et al., "Human-level control through deep reinforcement learning," Nature, 2015.
[7] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh, "Action-conditional video prediction using deep networks in Atari games," in NIPS, 2015.
[8] Y. Liang, M. C. Machado, E. Talvitie, and M. Bowling, "State of the art control of Atari games using shallow reinforcement learning," in AAMAS, 2016.
[9] R. J. Williams and L. C. Baird III, "Tight performance bounds on greedy policies based on imperfect value functions," Northeastern University, Tech. Rep., 1993.
[10] R. S. Sutton, "Learning to predict by the method of temporal differences," Machine Learning, 1988.
[11] S. Bradtke and A. Barto, "Linear least-squares algorithms for temporal difference learning," Machine Learning, 1996.
[12] H. Yu and D. P. Bertsekas, "Convergence results for some temporal difference methods based on least squares," IEEE TAC, 2009.
[13] A. Geramifard, T. J. Walsh, N. Roy, and J. How, "Batch iFDD: A scalable matching pursuit algorithm for solving MDPs," in UAI, 2013.
[14] A. M. Farahmand and D. Precup, "Value pursuit iteration," in NIPS, 2012.
[15] R. Tibshirani, "Regression shrinkage and selection via the Lasso," JRSSB, 1996.
[16] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE TSP, 1993.
[17] J. Z. Kolter and A. Y. Ng, "Regularization and feature selection in least-squares temporal difference learning," in ICML, 2009.
[18] M. Petrik, G. Taylor, R. Parr, and S. Zilberstein, "Feature selection using regularization in approximate linear programs for Markov decision processes," in ICML, 2010.
[19] J. Johns, C. Painter-Wakefield, and R. Parr, "Linear complementarity for regularized policy evaluation and improvement," in NIPS, 2010.
[20] R. Schoknecht, "Optimality of reinforcement learning algorithms with linear function approximation," in NIPS, 2002.
[21] R. S. Sutton, C. Szepesvári, A. Geramifard, and M. H. Bowling, "Dyna-style planning with linear function approximation and prioritized sweeping," in UAI, 2008.
[22] M. Lagoudakis and R. Parr, "Least-squares policy iteration," JMLR, 2003.
[23] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[24] P. C. Hansen, "The truncated SVD as a method for regularization," BIT Numerical Mathematics, 1987.
[25] M. Ghavamzadeh, A. Lazaric, O. Maillard, and R. Munos, "LSTD with random projections," in NIPS, 2010.
[26] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. The MIT Press, 1998.
Online ICA: Understanding Global Dynamics of
Nonconvex Optimization via Diffusion Processes
Chris Junchi Li
Zhaoran Wang
Han Liu
Department of Operations Research and Financial Engineering, Princeton University
{junchil, zhaoran, hanliu}@princeton.edu
Abstract
Solving statistical learning problems often involves nonconvex optimization. Despite the empirical success of nonconvex statistical optimization methods, their
global dynamics, especially convergence to the desirable local minima, remain
less well understood in theory. In this paper, we propose a new analytic paradigm
based on diffusion processes to characterize the global dynamics of nonconvex
statistical optimization. As a concrete example, we study stochastic gradient descent (SGD) for the tensor decomposition formulation of independent component
analysis. In particular, we cast different phases of SGD into diffusion processes,
i.e., solutions to stochastic differential equations. Initialized from an unstable equilibrium, the global dynamics of SGD transit over three consecutive phases: (i) an
unstable Ornstein-Uhlenbeck process slowly departing from the initialization, (ii)
the solution to an ordinary differential equation, which quickly evolves towards the
desirable local minimum, and (iii) a stable Ornstein-Uhlenbeck process oscillating
around the desirable local minimum. Our proof techniques are based upon Stroock
and Varadhan's weak convergence of Markov chains to diffusion processes, which
are of independent interest.
1 Introduction
For solving a broad range of large-scale statistical learning problems, e.g., deep learning, nonconvex
optimization methods often exhibit favorable computational and statistical efficiency empirically.
However, there is still a lack of theoretical understanding of the global dynamics of these nonconvex
optimization methods. Specifically, it remains largely unexplored why simple optimization algorithms,
e.g., stochastic gradient descent (SGD), often exhibit fast convergence towards local minima with desirable statistical accuracy. In this paper, we aim to develop a new analytic framework to theoretically
understand this phenomenon.
The dynamics of nonconvex statistical optimization are of central interest to a recent line of work.
Specifically, by exploring the local convexity within the basins of attraction, [1, 5-8, 10-13, 20-22, 24-26, 31, 35, 36, 39, 46-58] establish local fast rates of convergence towards the desirable local minima for a variety of statistical problems. Most of these characterizations of local dynamics are based
on two decoupled ingredients from statistics and optimization: (i) the local (approximately) convex
geometry of the objective functions, which is induced by the underlying statistical models, and (ii)
adaptation of classical optimization analysis [19, 34] by incorporating the perturbations induced by
nonconvex geometry as well as random noise. To achieve global convergence guarantees, they rely
on various problem-specific approaches to obtain initializations that provably fall into the basins of
attraction. Meanwhile, for some learning problems, such as phase retrieval and tensor decomposition
for latent variable models, it is empirically observed that good initializations within the basins of
attraction are not essential to the desirable convergence. However, it remains highly challenging to
characterize the global dynamics, especially within the highly nonconvex regions outside the local
basins of attraction.
In this paper, we address this problem with a new analytic framework based on diffusion processes.
In particular, we focus on the concrete example of SGD applied on the tensor decomposition formulation of independent component analysis (ICA). Instead of adapting classical optimization analysis
accordingly to local nonconvex geometry, we cast SGD in different phases as diffusion processes,
i.e., solutions to stochastic differential equations (SDE), by analyzing the weak convergence from
discrete Markov chains to their continuous-time limits [17, 40]. The SDE automatically incorporates
the geometry and randomness induced by the statistical model, which allows us to establish the
exact dynamics of SGD. In contrast, classical optimization analysis only yields upper bounds on
the optimization error, which are unlikely to be tight in the presence of highly nonconvex geometry,
especially around the stationary points that have negative curvatures along certain directions. In
particular, we identify three consecutive phases of the global dynamics of SGD, which is illustrated
in Figure 1.
(i) We consider the most challenging initialization at a stationary point with negative curvatures,
which can be cast as an unstable equilibrium of the SDE. Within the first phase, the dynamics
of SGD are characterized by an unstable Ornstein-Uhlenbeck process [2, 37], which departs
from the initialization at a relatively slow rate and enters the second phase.
(ii) Within the second phase, the dynamics of SGD are characterized by the exact solution to an
ordinary differential equation. This solution evolves towards the desirable local minimum at
a relatively fast rate until it approaches a small basin around the local minimum.
(iii) Within the third phase, the dynamics of SGD are captured by a stable Ornstein-Uhlenbeck
process [2, 37], which oscillates within a small basin around the local minimum.
[Figure 1 here. In-figure labels: objective value (vertical axis), time (horizontal axis), local minima, other stationary points, phases (i), (ii), (iii).]
Figure 1: Left: an illustration of the objective function for the tensor decomposition formulation of ICA. Note that here we use the spherical coordinate system and add a global offset of 2 to the objective function for better illustration. Right: An illustration of the three phases of diffusion processes.
More related work. Our results are connected with a very recent line of work [3, 18, 27, 29, 38, 42-45] on the global dynamics of nonconvex statistical optimization. In detail, they characterize the
global geometry of nonconvex objective functions, especially around their saddle points or local
maxima. Based on the geometry, they prove that specific optimization algorithms, e.g., SGD with
artificial noise injection, gradient descent with random initialization, and second-order methods, avoid
the saddle points or local maxima, and globally converge to the desirable local minima. Among
these results, our results are most related to [18], which considers SGD with noise injection on
ICA. Compared with this line of work, our analysis takes a completely different approach based on
diffusion processes, which is also related to another line of work [14, 15, 30, 32, 33, 41].
Without characterizing the global geometry, we establish the global exact dynamics of SGD, which
illustrate that, even starting from the most challenging stationary point, it may be unnecessary to use
additional techniques such as noise injection, random initialization, and second-order information to
ensure the desirable convergence. In other words, the unstable Ornstein-Uhlenbeck process within
the first phase itself is powerful enough to escape from stationary points with negative curvatures.
This phenomenon is not captured by the previous upper bound-based analysis, since previous upper
bounds are relatively coarse-grained compared with the exact dynamics, which naturally give a sharp
characterization simultaneously from upper and lower bounds. Furthermore, in Section 5 we will
show that our sharp diffusion process-based characterization provides understanding on different
phases of dynamics of our online/SGD algorithm for ICA.
A recent work [29] analyzes an online principal component analysis algorithm based on the intuition
gained from diffusion approximation. In this paper, we consider a different statistical problem with a
rigorous characterization of the diffusion approximations in three separate phases.
Our contribution. In summary, we propose a new analytic paradigm based on diffusion processes
for characterizing the global dynamics of nonconvex statistical optimization. For SGD on ICA, we
identify the aforementioned three phases for the first time. Our analysis is based on Stroock and
Varadhan's weak convergence of Markov chains to diffusion processes, which are of independent
interest.
2 Background
In this section we formally introduce a special model of independent component analysis (ICA) and the associated SGD algorithm. Let $\{X^{(i)}\}_{i=1}^n$ be the data sample identically distributed as $X \in \mathbb{R}^d$. We make assumptions for the distribution of $X$ as follows. Let $\|\cdot\|$ be the $\ell_2$-norm of a vector.
Assumption 1. There is an orthonormal matrix $A \in \mathbb{R}^{d\times d}$ such that $X = AY$, where $Y \in \mathbb{R}^d$ is a random vector that has independent entries satisfying the following conditions:
(i) The distribution of each $Y_i$ is symmetric about 0;
(ii) There is a constant $B$ such that $\|Y\|^2 \le B$;
(iii) The $Y_1, \ldots, Y_d$ are independent with identical $m$-th moments for $m \le 8$, denoted by $\kappa_m \equiv E Y_1^m$;
(iv) $\kappa_1 = E Y_i = 0$, $\kappa_2 = E Y_i^2 = 1$, $\kappa \equiv \kappa_4 \neq 3$.
Assumption 1(iii) above is a generalization of i.i.d. tensor components. Let $A = (a_1, \ldots, a_d)$, whose columns form an orthonormal basis. Our goal is to estimate the orthonormal basis vectors $a_i$ from the online data $X_1, \ldots, X_n$. We first establish a preliminary lemma.
Lemma 1. Let $T = E(X^{\otimes 4})$ be the 4th-order tensor whose $(i,j,k,l)$-entry is $E(X_i X_j X_k X_l)$. Under Assumption 1, we have
$$T(u,u,u,u) \equiv E\big(u^\top X\big)^4 = 3 + (\kappa - 3)\sum_{i=1}^d \big(a_i^\top u\big)^4. \qquad (2.1)$$
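Eq. (2.1) can be verified by Monte Carlo. The sketch below assumes $Y_i$ drawn i.i.d. uniform on $[-\sqrt{3}, \sqrt{3}]$ (so $E Y^2 = 1$ and $\kappa = 9/5$), which is one distribution satisfying Assumption 1; this choice is purely illustrative.

```python
# Monte Carlo sanity check of Eq. (2.1) with uniform entries for Y.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 400_000
A = np.linalg.qr(rng.normal(size=(d, d)))[0]        # random orthonormal basis
Y = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, d))
X = Y @ A.T                                          # X = A Y, sample by sample
kappa = 9.0 / 5.0                                    # E Y^4 for this uniform law

u = rng.normal(size=d); u /= np.linalg.norm(u)
lhs = np.mean((X @ u) ** 4)                          # E (u^T X)^4
rhs = 3 + (kappa - 3) * np.sum((A.T @ u) ** 4)       # Eq. (2.1)
print(lhs, rhs)                                      # agree up to MC error
```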
Lemma 1 implies that finding the $a_i$'s can be cast into the solution to the following population optimization problem:
$$\mathop{\mathrm{argmin}}_u \; -\mathrm{sign}(\kappa-3)\cdot E\big(u^\top X\big)^4 = \mathop{\mathrm{argmin}}_u \; -\sum_{i=1}^d \big(a_i^\top u\big)^4 \quad \text{subject to } \|u\| = 1. \qquad (2.2)$$
It is straightforward to conclude that all stable equilibria of (2.2) are $\pm a_i$, whose number grows linearly with $d$. Meanwhile, by analyzing the Hessian matrices, the set of unstable equilibria of (2.2) includes (but is not limited to) all $v^\star = d^{-1/2}(\pm 1, \cdots, \pm 1)$, whose number grows exponentially as $d$ increases [18, 44].
Now we introduce the SGD algorithm for solving (2.2) with finite samples. Let $S^{d-1} = \{u : \|u\| = 1\}$ be the unit sphere in $\mathbb{R}^d$, and denote by $\Pi u = u/\|u\|$ for $u \neq 0$ the projection operator onto $S^{d-1}$. With appropriate initialization, the SGD for tensor method iteratively updates the estimator via the following Eq. (2.3):
$$u^{(n)} = \Pi\left(u^{(n-1)} + \mathrm{sign}(\kappa-3)\,\eta\,\big(u^{(n-1)\top} X^{(n)}\big)^3 X^{(n)}\right). \qquad (2.3)$$
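A minimal sketch of the update (2.3) on synthetic data follows; the step size, sample count, and uniform data model below are illustrative assumptions, not the paper's tuning.

```python
# Sketch of the online SGD update (2.3) for the tensor method.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 5, 200_000, 5e-4
A = np.linalg.qr(rng.normal(size=(d, d)))[0]         # true components a_1..a_d
kappa = 9.0 / 5.0                                    # E Y^4 < 3 here, so sign = -1
sgn = np.sign(kappa - 3.0)

u = rng.normal(size=d); u /= np.linalg.norm(u)       # random initialization
for _ in range(n):
    X = A @ rng.uniform(-np.sqrt(3), np.sqrt(3), size=d)   # one online sample
    u = u + sgn * eta * (u @ X) ** 3 * X                   # stochastic gradient step
    u /= np.linalg.norm(u)                                 # projection Pi

# u should align with one column of A (up to sign)
print(np.max(np.abs(A.T @ u)))   # close to 1
```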
The SGD algorithm, which performs stochastic approximation using a single online data sample in each update, has the advantage of lower temporal and spatial complexity, especially when $d$ is high [18, 29]. An essential issue of this nonconvex optimization problem is how the algorithm escapes from unstable equilibria. [18] provides a method of adding artificial noise to the samples, where the noise variables are uniformly sampled from $S^{d-1}$. In our work, we demonstrate that under some reasonable distributional assumptions, the online data provide sufficient noise for the algorithm to escape from the unstable equilibria.
By symmetry, our algorithm in Eq. (2.3) converges to a uniformly random tensor component out of the $d$ components. In order to solve the problem completely, one can repeatedly run the algorithm using different sets of online samples until all tensor components are found. In the case where $d$ is high, the well-known coupon collector problem [16] implies that it takes $\asymp d\log d$ runs of the SGD algorithm to obtain all $d$ tensor components.
Remark. From Eq. (2.2) we see the tensor structure in Eq. (2.1) is unidentifiable in the case of $\kappa = 3$; see more discussion in [4, 18]. Therefore in Assumption 1 we rule out the value $\kappa = 3$ and call the value $|\kappa - 3|$ the tensor gap. The reader will see later that, analogous to the eigengap in the SGD algorithm for principal component analysis (PCA) [29], the tensor gap plays a vital role in the time complexity of the algorithm analysis.
3 Markov Processes and Differential Equation Approximation
To work on the approximation we first conclude the following proposition.
Proposition 1. The iteration $u^{(n)}$, $n = 0, 1, \ldots$, generated by Eq. (2.3) forms a discrete-time, time-homogeneous Markov process that takes values on $S^{d-1}$. Furthermore, $u^{(n)}$ satisfies the strong Markov property.
For convenience of analysis we use the transformed iteration $v^{(n)} \equiv A^\top u^{(n)}$ in the rest of this paper. The update equation in Eq. (2.3) is equivalently written as
$$v^{(n)} = A^\top u^{(n)} = \Pi\left(A^\top u^{(n-1)} \pm \eta\,\big(u^{(n-1)\top} A A^\top X^{(n)}\big)^3 A^\top X^{(n)}\right) = \Pi\left(v^{(n-1)} \pm \eta\,\big(v^{(n-1)\top} Y^{(n)}\big)^3 Y^{(n)}\right). \qquad (3.1)$$
Here $\pm$ has the same sign as $\kappa - 3$. It is obvious from Proposition 1 that the (strong) Markov property applies to $v^{(n)}$, and one can analyze the iterates $v^{(n)}$ generated by Eq. (3.1) from the perspective of Markov processes.
Our next step is to conclude that as the stepsize $\eta \to 0^+$, the iterates generated by Eq. (2.3), under the time scaling that speeds up the algorithm by a factor $\eta^{-1}$, can be globally approximated by the solution to the following ODE system. To characterize such an approximation we use the theory of weak convergence to diffusions [17, 40] via computing the infinitesimal mean and variance for SGD for the tensor method. We remind the reader of the definition of weak convergence $Z^\eta \Rightarrow Z$ of stochastic processes: for any $0 \le t_1 < t_2 < \cdots < t_n$, the following convergence in distribution occurs as $\eta \to 0^+$:
$$\big(Z^\eta(t_1), Z^\eta(t_2), \ldots, Z^\eta(t_n)\big) \stackrel{d}{\longrightarrow} \big(Z(t_1), Z(t_2), \ldots, Z(t_n)\big).$$
To highlight the dependence on $\eta$ we add it in the superscripts of the iterates, writing $v^{\eta,(n)} = v^{(n)}$. Recall that $\lfloor t\eta^{-1}\rfloor$ is the integer part of the real number $t\eta^{-1}$.
Theorem 1. If for each $k = 1, \ldots, d$, $v_k^{\eta,(0)}$ converges weakly to some constant scalar $V_k^o$ as $\eta \to 0^+$, then the Markov process $v_k^{\eta,(\lfloor t\eta^{-1}\rfloor)}$ converges weakly to the solution of the ODE system
$$\frac{dV_k}{dt} = |\kappa - 3|\, V_k\Big(V_k^2 - \sum_{i=1}^d V_i^4\Big), \qquad k = 1, \ldots, d, \qquad (3.2)$$
with initial values $V_k(0) = V_k^o$.
To understand the complex ODE system in Eq. (3.2) we first investigate the case of $d = 2$. Considering the change of variable $V_1^2(t)$, we have by the chain rule and $V_2^2 = 1 - V_1^2$ the following derivation:
$$\frac{dV_1^2}{dt} = 2V_1\cdot\frac{dV_1}{dt} = 2|\kappa-3|\,V_1^2\big(V_1^2 - V_1^4 - (1-V_1^2)^2\big) = -4|\kappa-3|\,V_1^2\Big(V_1^2 - \frac{1}{2}\Big)\big(V_1^2 - 1\big). \qquad (3.3)$$
Eq. (3.3) is an autonomous, first-order ODE for $V_1^2$. Although this equation is complex, a closed-form solution is available:
$$V_1^2(t) = 0.5 \pm 0.5\big(1 + C\exp(-2|\kappa-3|t)\big)^{-0.5},$$
and $V_2^2(t) = 1 - V_1^2(t)$, where the choices of $\pm$ and $C$ depend on the initial value. The above solution allows us to conclude that if the initial value satisfies $(V_1^o)^2 < (V_2^o)^2$ (resp. $(V_1^o)^2 > (V_2^o)^2$), then $V_1^2(t)$ approaches 0 (resp. 1) as $t \to \infty$. This intuition generalizes to higher $d$: the ODE system in Eq. (3.2) converges to the coordinate direction $\pm e_k$ if $(V_k^o)^2$ is strictly maximal among $(V_1^o)^2, \ldots, (V_d^o)^2$ in the initial vector. To estimate the traverse time we establish
the following Proposition 2.
Proposition 2. Fix $\delta \in (0, 1/2)$ and initial values $V_k(0) = V_k^o$ that satisfy $(V_{k_0}^o)^2 \ge 2(V_k^o)^2$ for all $1 \le k \le d$, $k \neq k_0$. Then there is a constant (called the traverse time) $T$, depending only on $d$ and $\delta$, such that $V_{k_0}^2(T) \ge 1 - \delta$. Furthermore, $T$ has the following upper bound: let $y(t)$ be the solution to the following auxiliary ODE,
$$\frac{dy}{dt} = y^2(1-y), \qquad (3.4)$$
with $y(0) = 2/(d+1)$. Let $T_0$ be the time such that $y(T_0) = 1 - \delta$. Then
$$T \le |\kappa-3|^{-1}\, T_0 \le |\kappa-3|^{-1}\big(d - 3 + 4\log(2\delta)^{-1}\big). \qquad (3.5)$$
Proposition 2 concludes that, by admitting a factor-of-2 gap between the largest $(V_{k_0}^o)^2$ and the second largest $(V_k^o)^2$, $k \neq k_0$, an estimate of the traverse time can be given, which is tight enough for our purposes in Section 5.
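The closed-form solution below Eq. (3.3) can be checked against a direct numerical integration of the ODE system (3.2). The sketch below does this for $d = 2$ with forward Euler; the value of $\kappa$, the step size, and the initial condition are illustrative assumptions.

```python
# Sketch: integrate the ODE system (3.2) for d = 2 and compare V_1^2(t)
# with the closed form below Eq. (3.3).
import numpy as np

kappa, dt, T = 1.8, 1e-3, 20.0
gap = abs(kappa - 3.0)
V = np.array([np.sqrt(0.6), np.sqrt(0.4)])          # V_1^2(0) = 0.6 > 1/2

# constant C from V_1^2(0) = 0.5 + 0.5 (1 + C)^(-1/2)
C = (2 * V[0] ** 2 - 1.0) ** -2 - 1.0

err = 0.0
for t in np.arange(0.0, T, dt):
    closed = 0.5 + 0.5 * (1 + C * np.exp(-2 * gap * t)) ** -0.5
    err = max(err, abs(V[0] ** 2 - closed))
    V = V + dt * gap * V * (V ** 2 - np.sum(V ** 4))    # Euler step of (3.2)
print("max deviation:", err)                             # small (Euler error)
```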
Remark. In an earlier paper [29], which focuses on the SGD algorithm for PCA, when the stepsize is small the algorithm iteration is approximated by the solution to an ODE system after an appropriate time rescaling. The approximate ODE system for SGD for PCA is
$$\frac{dV_k}{dt} = 2V_k\sum_{i=1}^d (\lambda_k - \lambda_i)\,V_i^2, \qquad k = 1, \ldots, d. \qquad (3.6)$$
The analysis there also involves the computation of the infinitesimal mean and variance for each coordinate as the stepsize $\eta \to 0^+$, and the theory of convergence to diffusions [17, 40]. A closed-form solution to Eq. (3.6) is obtained in [29], called the generalized logistic curves. In contrast, to our best knowledge a closed-form solution to Eq. (3.2) is generally not available.
4 Local Approximation via Stochastic Differential Equation
The ODE approximation in Section 3 is very informative: it characterizes globally the trajectory of our algorithm for ICA or the tensor method in Eq. (2.3) with $o(1)$ approximation errors. However, it fails to characterize the behavior near equilibria, where the gradients in our ODE system are close to zero. For instance, if the SGD algorithm starts from $v^\star$, it is noise on a microscopic magnitude of $O(\eta^{1/2})$, generated by the online samples, that helps the iterates escape from a neighborhood of $v^\star$.
Our main goal in this section is to demonstrate that, under appropriate spatial and temporal scalings, the algorithm iteration converges locally to the solution of certain stochastic differential equations (SDE). We provide the SDE approximations in two scenarios: near an arbitrary tensor component (Subsection 4.1), which indicates that our SGD for the tensor method converges to a local minimum at a desirable rate, and near a special local maximum (Subsection 4.2), which implies that the stochastic nature of our SGD algorithm for the tensor method helps the iterates escape from unstable equilibria. Note that in the algorithm iterates, the escape from stationary points occurs first, followed by the ODE phase and then by the phase of convergence to a local minimum. We discuss this further in Section 5.
4.1 Neighborhood of Local Minimizers
To analyze the behavior of SGD for the tensor method we first consider the case where the iterates enter a neighborhood of one local minimizer, i.e., a tensor component. Since the tensor decomposition in Eq. (2.2) is full-rank and symmetric, we consider without loss of generality the neighborhood near $e_1$, the first tensor component. The following Theorem 2 indicates that, under appropriate spatial and temporal scalings, the process admits an approximation by an Ornstein-Uhlenbeck process. Such an approximation is characterized rigorously using the weak convergence theory of Markov processes [17, 40]. The readers are referred to [37] for fundamental topics on SDE.
Theorem 2. If for each $k = 2, \ldots, d$, $\eta^{-1/2} v_k^{\eta,(0)}$ converges weakly to $U_k^o \in (0, \infty)$ as $\eta \to 0^+$, then the stochastic process $\eta^{-1/2} v_k^{\eta,(\lfloor t\eta^{-1}\rfloor)}$ converges weakly to the solution of the stochastic differential equation
$$dU_k(t) = -|\kappa-3|\, U_k(t)\,dt + \kappa_6^{1/2}\, dB_k(t), \qquad (4.1)$$
with initial values $U_k(0) = U_k^o$. Here $B_k(t)$ is a standard one-dimensional Brownian motion.
We identify the solution to Eq. (4.1) as an Ornstein-Uhlenbeck process, which can be expressed in terms of an Itô integral:
$$U_k(t) = U_k^o \exp(-|\kappa-3|t) + \kappa_6^{1/2}\int_0^t \exp\big(-|\kappa-3|(t-s)\big)\, dB_k(s). \qquad (4.2)$$
Itô isometry, along with the mean-zero property of the Itô integral, gives
$$E\big(U_k(t)\big)^2 = \big(U_k^o\big)^2\exp(-2|\kappa-3|t) + \kappa_6\int_0^t \exp\big(-2|\kappa-3|(t-s)\big)\,ds = \frac{\kappa_6}{2|\kappa-3|} + \left(\big(U_k^o\big)^2 - \frac{\kappa_6}{2|\kappa-3|}\right)\exp(-2|\kappa-3|t),$$
which, by taking the limit $t \to \infty$, approaches $\kappa_6/(2|\kappa-3|)$. From the above analysis we conclude that the Ornstein-Uhlenbeck process has the mean-reverting property that its mean decays exponentially towards 0 with persistent fluctuations at equilibrium.
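The stationary second moment $\kappa_6/(2|\kappa-3|)$ can be reproduced by an Euler-Maruyama simulation of Eq. (4.1); the parameter values below are illustrative ($\kappa$ and $\kappa_6$ taken from the uniform data model used earlier).

```python
# Euler-Maruyama simulation of the stable OU process (4.1).
import numpy as np

rng = np.random.default_rng(0)
kappa, kappa6, dt, n_steps, n_paths = 1.8, 27.0 / 7.0, 1e-3, 20_000, 2_000
gap = abs(kappa - 3.0)

U = np.ones(n_paths)                   # U_k(0) = 1 for every path
for _ in range(n_steps):
    dB = rng.normal(scale=np.sqrt(dt), size=n_paths)
    U += -gap * U * dt + np.sqrt(kappa6) * dB

print(np.mean(U ** 2), kappa6 / (2 * gap))   # both ~ 1.6
```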
4.2 Escape from Unstable Equilibria
In this subsection we consider SGD for the tensor method starting from a sufficiently small neighborhood of a special unstable equilibrium. We show that, after appropriate rescalings of both time and space, the SGD iteration for the tensor method can be approximated by the solution to a second SDE. Analyzing the approximate SDE suggests that our SGD algorithm iterations can get rid of the unstable equilibria (including local maxima and stationary points with negative curvatures), whereas the traditional gradient descent (GD) method gets stuck. In other words, under weak distributional assumptions the stochastic gradient plays a vital role that helps the escape. As an illustrative example, we consider the special stationary points $v^\star = d^{-1/2}(\pm 1, \ldots, \pm 1)$. Consider a submanifold $S_F \subseteq S^{d-1}$, where
$$S_F = \big\{v \in S^{d-1} : \text{there exists } 1 \le k < k' \le d \text{ such that } v_k^2 = v_{k'}^2 = \max_{1\le i\le d} v_i^2\big\}.$$
In words, $S_F$ consists of all $v \in S^{d-1}$ where the maximum of $v_k^2$ is not unique. In the case of $d = 3$, it is illustrated by Figure 1 that $S_F$ is the frame of a 3-dimensional box, and hence we call $S_F$ the frame. Let
$$W_{kk'}(t) = \eta^{-1/2}\log\big(v_k^{\eta,(\lfloor t\eta^{-1}\rfloor)}\big)^2 - \eta^{-1/2}\log\big(v_{k'}^{\eta,(\lfloor t\eta^{-1}\rfloor)}\big)^2. \qquad (4.3)$$
The reason we study $W_{kk'}(t)$ is that these $d(d-1)$ functions of $v \in S^{d-1}$ form a local coordinate map around $v^\star$ and further characterize the distance between $v$ and $S_F$ on a spatial scale of $\eta^{1/2}$. We define the positive constant $\zeta_{d,\kappa}$ as
$$\zeta_{d,\kappa}^2 = 8d^{-2}\kappa_8 + (16d-28)\kappa_6 + 15d\,\kappa_4^2 - 5(72d^2 - 228d + 175)\kappa_4 + 15(2d-7)(d-2)(d-3). \qquad (4.4)$$
(4.4)
Theorem 3. Let Wkk0 (t) be defined as in Eq. (4.3), and let ?d, be as in Eq. (4.4). If for each distinct
o
k, k 0 = 1, . . . , d, Wkk0 (0) converges weakly to Wkk
! 0+ then the stochastic
0 2 (0, 1) as
process Wkk0 (t) converges weakly to the solution of the stochastic differential equation
2|
3|
dWkk0 (t) =
Wkk0 (t)dt + ?d, dBkk0 (t)
(4.5)
d
o
with initial values Wkk0 (0) = Wkk
0 . Here Bkk 0 (t) is a standard one-dimensional Brownian motion.
We can solve Eq. (4.5) and obtain an unstable Ornstein-Uhlenbeck process:
$$W_{kk'}(t) = \left(W_{kk'}^o + \zeta_{d,\kappa}\int_0^t \exp\Big(-\frac{2|\kappa-3|}{d}\,s\Big)\, dB_{kk'}(s)\right)\exp\Big(\frac{2|\kappa-3|}{d}\,t\Big). \qquad (4.6)$$
Let $C_{kk'}$ be defined as
$$C_{kk'} \equiv W_{kk'}^o + \zeta_{d,\kappa}\int_0^\infty \exp\Big(-\frac{2|\kappa-3|}{d}\,s\Big)\, dB_{kk'}(s). \qquad (4.7)$$
We conclude that the following holds.
(i) $C_{kk'}$ is a normal variable with mean $W_{kk'}^o$ and variance $d\,\zeta_{d,\kappa}^2/(4|\kappa-3|)$;
(ii) When $t$ is large, $W_{kk'}(t)$ has the following approximation:
$$W_{kk'}(t) \approx C_{kk'}\exp\Big(\frac{2|\kappa-3|}{d}\,t\Big). \qquad (4.8)$$
d
To verify (i) above we have the It? integral in Eq. (4.6)
?
?
?
?
Z 1
2|
3|
E ?d,
exp
s dBkk0 (s) = 0,
d
0
and by using It? isometry
?
?
?
?2
?
?
Z 1
Z t
4|
3|
2|
3|
2
E ?d,
exp
s dBkk0 (s) = ?d,
exp
s ds
d
d
0
0
?
?
Z 1
d?2d,
4|
3|
? ?2d,
exp
s ds =
.
d
4|
3|
0
The analysis above of the unstable Ornstein-Uhlenbeck process indicates that the process has a momentum nature: when $t$ is large, it can be regarded as sitting at a normally distributed location centered at 0 and growing exponentially in magnitude. In Section 5 we will see how the result in Theorem 3 explains the escape from unstable equilibria.
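The escape mechanism can be visualized by simulating the unstable OU process (4.5) from $W(0) = 0$ and checking the normal prefactor $C$ in (4.8); in the sketch below $\zeta_{d,\kappa}$ is set to 1 purely for illustration, and all parameter values are assumptions.

```python
# Sketch: simulate the unstable OU process (4.5) started at W(0) = 0 and
# check that the rescaled endpoint C = W(T) exp(-2|kappa-3| T / d) is
# normal with variance d zeta^2 / (4 |kappa-3|), as in (4.7)-(4.8).
import numpy as np

rng = np.random.default_rng(0)
d, kappa, zeta, dt, T, n_paths = 10, 1.8, 1.0, 1e-3, 30.0, 5_000
a = 2 * abs(kappa - 3.0) / d

W = np.zeros(n_paths)
for _ in range(int(T / dt)):
    W += a * W * dt + zeta * rng.normal(scale=np.sqrt(dt), size=n_paths)

C_hat = W * np.exp(-a * T)             # rescaled endpoints estimate C
print(np.var(C_hat), d * zeta**2 / (4 * abs(kappa - 3.0)))   # both ~ 2.08
```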
5 Phase Analysis
In this section, we utilize the weak convergence results in Sections 3 and 4 to understand the dynamics of online ICA in the different phases. For purposes of illustration and brevity, we restrict ourselves to the case of starting point $v^\star$, a local maximum that has negative curvatures in every direction. Below, we write $Z^\eta \asymp W^\eta$ as $\eta \to 0^+$ when the ratio $Z^\eta/W^\eta \to 1$.
Phase I (Escape from unstable equilibria). Assume we start from $v^\star$; then $W_{kk'}^o = 0$ for all $k \neq k'$. We have from Eqs. (4.6) and (4.7) that
$$\log\frac{\big(v_k^{(n)}\big)^2}{\big(v_{k'}^{(n)}\big)^2} = \eta^{1/2}\, W_{kk'}(\eta n) \approx \eta^{1/2}\left(\frac{d\,\zeta_{d,\kappa}^2}{4|\kappa-3|}\right)^{1/2}\xi_{kk'}\exp\Big(\frac{2|\kappa-3|}{d}\,\eta n\Big), \qquad (5.1)$$
where $\xi_{kk'}$ is a standard normal variable. Suppose $k_1$ is the index that maximizes $\big(v_k^{(N_1)}\big)^2$ and $k_2$ maximizes $\big(v_k^{(N_1)}\big)^2$ over $k \neq k_1$. Then by Eq. (5.1) we know $\xi_{k_1k_2}$ is positive. By setting
$$\log\big(v_{k_1}^{(N_1)}\big)^2 - \log\big(v_{k_2}^{(N_1)}\big)^2 = \log 2,$$
we have, from the construction in the proof of Theorem 3, that as $\eta \to 0^+$,
$$N_1 = \frac{1}{2}\,|\kappa-3|^{-1}\, d\,\eta^{-1}\log\left(\xi_{k_1k_2}^{-1}\log 2\left(\frac{d\,\zeta_{d,\kappa}^2}{4|\kappa-3|}\right)^{-1/2}\eta^{-1/2}\right) \asymp \frac{1}{4}\,|\kappa-3|^{-1}\, d\,\eta^{-1}\log\big(\eta^{-1}\big).$$
Phase II (Deterministic traverse). By the (strong) Markov property we can restart the counter of iterations; we have the max and second max satisfying
$$\big(v_{k_1}^{(0)}\big)^2 = 2\big(v_{k_2}^{(0)}\big)^2.$$
Proposition 2 implies that it takes time
$$T \le |\kappa-3|^{-1}\big(d - 3 + 4\log(2\delta)^{-1}\big)$$
for the ODE to traverse from $V_1^2 = 2/(d+1) = 2V_k^2$ for $k > 1$ to $V_1^2 \ge 1-\delta$. Converting to the timescale of the SGD, the second phase has the following relations as $\eta \to 0^+$:
$$N_2 \asymp T\,\eta^{-1} \le |\kappa-3|^{-1}\big(d - 3 + 4\log(2\delta)^{-1}\big)\,\eta^{-1}.$$
Phase III (Convergence to stable equilibria). Again restart our counter. We have from the approximation in Theorem 2 and Eq. (4.2) that
$$E\big(v_k^{(n)}\big)^2 = \big(v_k^{(0)}\big)^2\exp(-2|\kappa-3|\eta n) + \eta\kappa_6\int_0^{n\eta}\exp\big(-2|\kappa-3|(n\eta - s)\big)\,ds = \frac{\eta\kappa_6}{2|\kappa-3|} + \left(\big(v_k^{(0)}\big)^2 - \frac{\eta\kappa_6}{2|\kappa-3|}\right)\exp(-2|\kappa-3|\eta n).$$
In terms of the iterations $v^{(n)}$, note the relationship $\sin^2\angle(v, e_1) = \sum_{k=2}^d v_k^2 = 1 - v_1^2$. The end of the ODE phase implies that $E\sin^2\angle(v^{(0)}, e_1) = \delta$, and hence
$$E\sin^2\angle\big(v^{(n)}, e_1\big) = \frac{(d-1)\eta\kappa_6}{2|\kappa-3|} + \left(\delta - \frac{(d-1)\eta\kappa_6}{2|\kappa-3|}\right)\exp(-2|\kappa-3|\eta n).$$
By setting
$$E\sin^2\angle\big(v^{(N_3)}, e_1\big) = (C_0+1)\cdot\frac{(d-1)\eta\kappa_6}{2|\kappa-3|},$$
we conclude that as $\eta \to 0^+$,
$$N_3 = \frac{1}{2|\kappa-3|}\log\left(\frac{2|\kappa-3|\,\delta}{C_0(d-1)\eta\kappa_6}\right)\eta^{-1} \asymp \frac{1}{2}\,|\kappa-3|^{-1}\eta^{-1}\log\big(\eta^{-1}\big).$$

6 Summary and discussions
In this paper, we take online ICA as a first step towards understanding the global dynamics of stochastic gradient descent. For general nonconvex optimization problems such as training deep networks, phase retrieval, dictionary learning and PCA, we expect similar multiple-phase phenomena. It is believed that the flavor of the asymptotic analysis above can help identify a class of stochastic algorithms for nonconvex optimization with statistical structure.
Our continuous-time analysis also reflects the dynamics of the algorithm in discrete time. This is substantiated by Theorems 1, 2 and 3, which rigorously characterize the convergence of the iterates to the ODE or SDEs by shifting to different temporal and spatial scales. In detail, our results imply that as $\eta \to 0^+$:
Phase I takes iteration number $N_1 \asymp (1/4)\,|\kappa-3|^{-1}\, d\,\eta^{-1}\log(\eta^{-1})$;
Phase II takes iteration number $N_2 \asymp |\kappa-3|^{-1}\, d\,\eta^{-1}$;
Phase III takes iteration number $N_3 \asymp (1/2)\,|\kappa-3|^{-1}\,\eta^{-1}\log(\eta^{-1})$.
After the three phases, the iteration reaches a point that is $C\,\kappa_6^{1/2}\,|\kappa-3|^{-1/2}(\eta d)^{1/2}$-distant on average from one local minimizer. As $\eta \to 0^+$ we have $N_2/N_1 \to 0$. This implies that the algorithm demonstrates the cutoff phenomenon, which frequently occurs in discrete-time Markov processes [28, Chap. 18]. In words, Phase II, where the objective value in Eq. (2.2) drops from $1-\varepsilon$ to $\varepsilon$, is a short phase compared to Phases I and III, so the convergence curve looks like the one illustrated in the right panel of Figure 1 instead of an exponentially decaying curve. As $\eta \to 0^+$ we have $N_1/N_3 \asymp d/2$, which suggests that Phase I of escaping from unstable equilibria dominates Phase III by a factor of $d/2$.
References
[1] Agarwal, A., Anandkumar, A., Jain, P. and Netrapalli, P. (2013). Learning sparsely used overcomplete
dictionaries via alternating minimization. arXiv preprint arXiv:1310.7991.
[2] Aldous, D. (1989). Probability approximations via the Poisson clumping heuristic. Applied Mathematical
Sciences, 77.
[3] Anandkumar, A. and Ge, R. (2016). Efficient approaches for escaping higher order saddle points in
non-convex optimization. arXiv preprint arXiv:1602.05908.
[4] Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M. and Telgarsky, M. (2014). Tensor decompositions for
learning latent variable models. Journal of Machine Learning Research, 15 2773–2832.
[5] Anandkumar, A., Ge, R. and Janzamin, M. (2014). Analyzing tensor power method dynamics in overcomplete regime. arXiv preprint arXiv:1411.1488.
[6] Arora, S., Ge, R., Ma, T. and Moitra, A. (2015). Simple, efficient, and neural algorithms for sparse coding.
arXiv preprint arXiv:1503.00778.
[7] Balakrishnan, S., Wainwright, M. J. and Yu, B. (2014). Statistical guarantees for the EM algorithm: From
population to sample-based analysis. arXiv preprint arXiv:1408.2156.
[8] Bhojanapalli, S., Kyrillidis, A. and Sanghavi, S. (2015). Dropping convexity for faster semi-definite optimization. arXiv preprint arXiv:1509.03917.
[9] Bronshtein, I. N. and Semendyayev, K. A. (1998). Handbook of mathematics. Springer.
[10] Cai, T. T., Li, X. and Ma, Z. (2015). Optimal rates of convergence for noisy sparse phase retrieval via
thresholded Wirtinger flow. arXiv preprint arXiv:1506.03382.
[11] Candès, E., Li, X. and Soltanolkotabi, M. (2014). Phase retrieval via Wirtinger flow: Theory and algorithms.
arXiv preprint arXiv:1407.1065.
[12] Chen, Y. and Candès, E. (2015). Solving random quadratic systems of equations is nearly as easy as
solving linear systems. In Advances in Neural Information Processing Systems.
[13] Chen, Y. and Wainwright, M. J. (2015). Fast low-rank estimation by projected gradient descent: General
statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025.
[14] Darken, C. and Moody, J. (1991). Towards faster stochastic gradient search. In Advances in Neural
Information Processing Systems.
[15] De Sa, C., Olukotun, K. and Ré, C. (2014). Global convergence of stochastic gradient descent for some
non-convex matrix problems. arXiv preprint arXiv:1411.1134.
[16] Durrett, R. (2010). Probability: Theory and examples. Cambridge University Press.
[17] Ethier, S. N. and Kurtz, T. G. (1985). Markov processes: Characterization and convergence, vol. 282.
John Wiley & Sons.
[18] Ge, R., Huang, F., Jin, C. and Yuan, Y. (2015). Escaping from saddle points – online stochastic gradient
for tensor decomposition. arXiv preprint arXiv:1503.02101.
[19] Golub, G. H. and Van Loan, C. F. (2012). Matrix computations. JHU Press.
[20] Gu, Q., Wang, Z. and Liu, H. (2014). Sparse PCA with oracle property. In Advances in neural information
processing systems.
[21] Gu, Q., Wang, Z. and Liu, H. (2016). Low-rank and sparse structure pursuit via alternating minimization.
In International Conference on Artificial Intelligence and Statistics.
[22] Hardt, M. (2014). Understanding alternating minimization for matrix completion. In Foundations of
Computer Science.
[23] Hirsch, M. W., Smale, S. and Devaney, R. L. (2012). Differential equations, dynamical systems, and an
introduction to chaos. Academic Press.
[24] Jain, P., Jin, C., Kakade, S. M. and Netrapalli, P. (2015). Computing matrix squareroot via non convex
local search. arXiv preprint arXiv:1507.05854.
[25] Jain, P. and Netrapalli, P. (2014). Fast exact matrix completion with finite samples. arXiv preprint
arXiv:1411.1087.
[26] Jain, P., Netrapalli, P. and Sanghavi, S. (2013). Low-rank matrix completion using alternating minimization.
In Symposium on Theory of Computing.
[27] Lee, J. D., Simchowitz, M., Jordan, M. I. and Recht, B. (2016). Gradient descent converges to minimizers.
arXiv preprint arXiv:1602.04915.
[28] Levin, D. A., Peres, Y. and Wilmer, E. L. (2009). Markov chains and mixing times. American Mathematical
Society.
[29] Li, C. J., Wang, M., Liu, H. and Zhang, T. (2016). Near-optimal stochastic approximation for online
principal component estimation. arXiv preprint arXiv:1603.05305.
[30] Li, Q., Tai, C. et al. (2015). Dynamics of stochastic gradient algorithms. arXiv preprint arXiv:1511.06251.
[31] Loh, P.-L. and Wainwright, M. J. (2015). Regularized M -estimators with nonconvexity: Statistical and
algorithmic theory for local optima. Journal of Machine Learning Research, 16 559–616.
[32] Mandt, S., Hoffman, M. D. and Blei, D. M. (2016). A variational analysis of stochastic gradient algorithms.
arXiv preprint arXiv:1602.02666.
[33] Mobahi, H. (2016). Training recurrent neural networks by diffusion. arXiv preprint arXiv:1601.04114.
[34] Nesterov, Y. (2004). Introductory lectures on convex optimization: A basic course, vol. 87. Springer.
[35] Netrapalli, P., Jain, P. and Sanghavi, S. (2013). Phase retrieval using alternating minimization. In Advances
in Neural Information Processing Systems.
[36] Netrapalli, P., Niranjan, U., Sanghavi, S., Anandkumar, A. and Jain, P. (2014). Non-convex robust pca. In
Advances in Neural Information Processing Systems.
[37] Oksendal, B. (2003). Stochastic differential equations. Springer.
[38] Panageas, I. and Piliouras, G. (2016). Gradient descent converges to minimizers: The case of non-isolated
critical points. arXiv preprint arXiv:1605.00405.
[39] Qu, Q., Sun, J. and Wright, J. (2014). Finding a sparse vector in a subspace: Linear sparsity using alternating directions. In Advances in Neural Information Processing Systems.
[40] Stroock, D. W. and Varadhan, S. S. (1979). Multidimensional diffusion processes, vol. 233. Springer.
[41] Su, W., Boyd, S. and Candès, E. (2014). A differential equation for modeling Nesterov's accelerated
gradient method: Theory and insights. In Advances in Neural Information Processing Systems.
[42] Sun, J., Qu, Q. and Wright, J. (2015). Complete dictionary recovery over the sphere i: Overview and the
geometric picture. arXiv preprint arXiv:1511.03607.
[43] Sun, J., Qu, Q. and Wright, J. (2015). Complete dictionary recovery over the sphere ii: Recovery by
Riemannian trust-region method. arXiv preprint arXiv:1511.04777.
[44] Sun, J., Qu, Q. and Wright, J. (2015). When are nonconvex problems not scary? arXiv preprint
arXiv:1510.06096.
[45] Sun, J., Qu, Q. and Wright, J. (2016). A geometric analysis of phase retrieval. arXiv preprint
arXiv:1602.06664.
[46] Sun, R. and Luo, Z.-Q. (2015). Guaranteed matrix completion via nonconvex factorization. In Foundations
of Computer Science.
[47] Sun, W., Lu, J., Liu, H. and Cheng, G. (2015). Provable sparse tensor decomposition. arXiv preprint
arXiv:1502.01425.
[48] Sun, W., Wang, Z., Liu, H. and Cheng, G. (2015). Non-convex statistical optimization for sparse tensor
graphical model. In Advances in Neural Information Processing Systems 28.
[49] Tan, K. M., Wang, Z., Liu, H. and Zhang, T. (2016). Sparse generalized eigenvalue problem: Optimal
statistical rates via truncated rayleigh flow. arXiv preprint arXiv:1604.08697.
[50] Tu, S., Boczar, R., Soltanolkotabi, M. and Recht, B. (2015). Low-rank solutions of linear matrix equations
via procrustes flow. arXiv preprint arXiv:1507.03566.
[51] Wang, Z., Gu, Q., Ning, Y. and Liu, H. (2015). High dimensional EM algorithm: Statistical optimization
and asymptotic normality. In Advances in Neural Information Processing Systems.
[52] Wang, Z., Liu, H. and Zhang, T. (2014). Optimal computational and statistical rates of convergence for
sparse nonconvex learning problems. Annals of statistics, 42 2164.
[53] Wang, Z., Lu, H. and Liu, H. (2014). Nonconvex statistical optimization: Minimax-optimal sparse PCA in
polynomial time. arXiv preprint arXiv:1408.5352.
[54] White, C. D., Sanghavi, S. and Ward, R. (2015). The local convexity of solving systems of quadratic
equations. arXiv preprint arXiv:1506.07868.
[55] Yang, Z., Wang, Z., Liu, H., Eldar, Y. C. and Zhang, T. (2015). Sparse nonlinear regression: Parameter
estimation and asymptotic inference under nonconvexity. arXiv preprint arXiv:1511.04514.
[56] Zhang, Y., Chen, X., Zhou, D. and Jordan, M. I. (2014). Spectral methods meet em: A provably optimal
algorithm for crowdsourcing. In Advances in neural information processing systems.
[57] Zhao, T., Wang, Z. and Liu, H. (2015). A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems.
[58] Zheng, Q. and Lafferty, J. (2015). A convergent gradient descent algorithm for rank minimization and
semidefinite programming from random linear measurements. arXiv preprint arXiv:1506.06081.
5,866 | 6,307 | The Parallel Knowledge Gradient Method
for Batch Bayesian Optimization
Jian Wu, Peter I. Frazier
Cornell University
Ithaca, NY, 14853
{jw926, pf98}@cornell.edu
Abstract
In many applications of black-box optimization, one can evaluate multiple points
simultaneously, e.g. when evaluating the performances of several different neural
networks in a parallel computing environment. In this paper, we develop a novel
batch Bayesian optimization algorithm ? the parallel knowledge gradient method.
By construction, this method provides the one-step Bayes optimal batch of points
to sample. We provide an efficient strategy for computing this Bayes-optimal
batch of points, and we demonstrate that the parallel knowledge gradient method
finds global optima significantly faster than previous batch Bayesian optimization
algorithms on both synthetic test functions and when tuning hyperparameters of
practical machine learning algorithms, especially when function evaluations are
noisy.
1 Introduction
In Bayesian optimization [19] (BO), we wish to optimize a derivative-free expensive-to-evaluate
function $f$ with feasible domain $A \subseteq \mathbb{R}^d$,

$$ \min_{x \in A} f(x), $$
with as few function evaluations as possible. In this paper, we assume that membership in the domain
A is easy to evaluate and we can evaluate f only at points in A. We assume that evaluations of f are
either noise-free, or have additive independent normally distributed noise. We consider the parallel
setting, in which we perform more than one simultaneous evaluation of f .
BO typically puts a Gaussian process prior distribution on the function f , updating this prior
distribution with each new observation of f , and choosing the next point or points to evaluate
by maximizing an acquisition function that quantifies the benefit of evaluating the objective as a
function of where it is evaluated. In comparison with other global optimization algorithms, BO often
finds “near optimal” function values with fewer evaluations [19]. As a consequence, BO is useful
when function evaluation is time-consuming, such as when training and testing complex machine
learning algorithms (e.g. deep neural networks) or tuning algorithms on large-scale dataset (e.g.
ImageNet) [4]. Recently, BO has become popular in machine learning as it is highly effective in
tuning hyperparameters of machine learning algorithms [8, 9, 19, 22].
Most previous work in BO assumes that we evaluate the objective function sequentially [13], though a
few recent papers have considered parallel evaluations [3, 5, 18, 25]. In practice, we can often
evaluate several different choices in parallel; for example, multiple machines can simultaneously train the
machine learning algorithm with different sets of hyperparameters. In this paper, we assume that
we can access q ? 1 evaluations simultaneously at each iteration. Then we develop a new parallel
acquisition function to guide where to evaluate next based on the decision-theoretical analysis.
Our Contributions. We propose a novel batch BO method which measures the information gain
of evaluating q points via a new acquisition function, the parallel knowledge gradient (q-KG). This
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
method is derived using a decision-theoretic analysis that chooses the set of points to evaluate next
that is optimal in the average-case with respect to the posterior when there is only one batch of points
remaining. Naively maximizing q-KG would be extremely computationally intensive, especially when
q is large, and so, in this paper, we develop a method based on infinitesimal perturbation analysis (IPA)
[25] to evaluate q-KG?s gradient efficiently, allowing its efficient optimization. In our experiments
on both synthetic functions and tuning practical machine learning algorithms, q-KG consistently
finds better function values than other parallel BO algorithms, such as parallel EI [2, 19, 25], batch
UCB [5] and parallel UCB with exploration [3]. q-KG provides especially large value when function
evaluations are noisy. The code in this paper is available at https://github.com/wujian16/qKG.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 gives
background on Gaussian processes and defines notation used later. Section 4 proposes our new
acquisition function q-KG for batch BO. Section 5 provides our computationally efficient approach
to maximizing q-KG. Section 6 presents the empirical performance of q-KG and several benchmarks
on synthetic functions and real problems. Finally, Section 7 concludes the paper.
2 Related work
Within the past several years, the machine learning community has revisited BO [8, 9, 18, 19, 20,
22] due to its huge success in tuning hyperparameters of complex machine learning algorithms.
BO algorithms consist of two components: a statistical model describing the function and an
acquisition function guiding evaluations. In practice, the Gaussian process (GP) [16] is the most
widely used statistical model due to its flexibility and tractability. Much of the literature in BO
focuses on designing good acquisition functions that reach optima with as few evaluations as possible.
Maximizing this acquisition function usually provides a single point to evaluate next, with common
acquisition functions for sequential Bayesian optimization including probability of improvement
(PI)[23], expected improvement (EI) [13], upper confidence bound (UCB) [21], entropy search (ES)
[11], and knowledge gradient (KG) [17].
Recently, a few papers have extended BO to the parallel setting, aiming to choose a batch of points to
evaluate next in each iteration, rather than just a single point. [10, 19] suggest parallelizing EI by
iteratively constructing a batch, in each iteration adding the point with maximal single-evaluation EI
averaged over the posterior distribution of previously selected points. [10] also proposes an algorithm
called “constant liar”, which iteratively constructs a batch of points to sample by maximizing single-evaluation EI while pretending that points previously added to the batch have already returned values.
There are also work extending UCB to the parallel setting. [5] proposes the GP-BUCB policy, which
selects points sequentially by a UCB criterion until filling the batch. Each time one point is selected,
the algorithm updates the kernel function while keeping the mean function fixed. [3] proposes an
algorithm combining UCB with pure exploration, called GP-UCB-PE. In this algorithm, the first
point is selected according to a UCB criterion; then the remaining points are selected to encourage the
diversity of the batch. These two algorithms extending UCB do not require Monte Carlo sampling,
making them fast and scalable. However, UCB criteria are usually designed to minimize cumulative
regret rather than immediate regret, causing these methods to underperform in BO, where we wish to
minimize simple regret.
The parallel methods above construct the batch of points in an iterative greedy fashion, optimizing
some single-evaluation acquisition function while holding the other points in the batch fixed. The
acquisition function we propose considers the batch of points collectively, and we choose the batch to
jointly optimize this acquisition function. Other recent papers that value points collectively include
[2] which optimizes the parallel EI by a closed-form formula, [15, 25], in which gradient-based
methods are proposed to jointly optimize a parallel EI criterion, and [18], which proposes a parallel
version of the ES algorithm and uses Monte Carlo Sampling to optimize the parallel ES acquisition
function.
We compare against methods from a number of these previous papers in our numerical experiments,
and demonstrate that we provide an improvement, especially in problems with noisy evaluations.
Our method is also closely related to the knowledge gradient (KG) method [7, 17] for the non-batch
(sequential) setting, which chooses the Bayes-optimal point to evaluate if only one iteration is left
[17], and the final solution that we choose is not restricted to be one of the points we evaluate.
(Expected improvement is Bayes-optimal if the solution is restricted to be one of the points we
evaluate.) We go beyond this previous work in two aspects. First, we generalize to the parallel setting.
Second, while the sequential setting allows evaluating the KG acquisition function exactly, evaluation
requires Monte Carlo in the parallel setting, and so we develop more sophisticated computational
techniques to optimize our acquisition function. Recently, [26] studies a nested batch knowledge
gradient policy. However, they optimize over a finite discrete feasible set, where the gradient of KG
does not exist. As a result, their computation of KG is much less efficient than ours. Moreover, they
focus on a nesting structure from materials science not present in our setting.
3 Background on Gaussian processes
In this section, we state our prior on f , briefly discuss well known results about Gaussian processes
(GP), and introduce notation used later. We put a Gaussian process prior over the function $f : A \to \mathbb{R}$,
which is specified by its mean function $\mu(x) : A \to \mathbb{R}$ and kernel function $K(x_1, x_2) : A \times A \to \mathbb{R}$.
We assume either exact or independent normally distributed measurement errors, i.e. the evaluation
y(xi ) at point xi satisfies
$$ y(x_i) \mid f(x_i) \sim \mathcal{N}\big(f(x_i), \sigma^2(x_i)\big), $$

where $\sigma^2 : A \to \mathbb{R}^+$ is a known function describing the variance of the measurement errors. If $\sigma^2$ is
not known, we can also estimate it as we do in Section 6.
Supposing we have measured $f$ at $n$ points $x^{(1:n)} := \{x^{(1)}, x^{(2)}, \ldots, x^{(n)}\}$ and obtained corresponding measurements $y^{(1:n)}$, we can then combine these observed function values with our prior to
obtain a posterior distribution on f . This posterior distribution is still a Gaussian process with the
mean function $\mu^{(n)}$ and the kernel function $K^{(n)}$, as follows:

$$ \mu^{(n)}(x) = \mu(x) + K(x, x^{(1:n)}) \left[ K(x^{(1:n)}, x^{(1:n)}) + \mathrm{diag}\{\sigma^2(x^{(1)}), \ldots, \sigma^2(x^{(n)})\} \right]^{-1} \big( y^{(1:n)} - \mu(x^{(1:n)}) \big), $$

$$ K^{(n)}(x_1, x_2) = K(x_1, x_2) - K(x_1, x^{(1:n)}) \left[ K(x^{(1:n)}, x^{(1:n)}) + \mathrm{diag}\{\sigma^2(x^{(1)}), \ldots, \sigma^2(x^{(n)})\} \right]^{-1} K(x^{(1:n)}, x_2). \qquad (3.1) $$
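For concreteness, here is a minimal NumPy sketch of the posterior update in (3.1); the callables k (kernel matrix) and mu (prior mean) and all names are illustrative placeholders, not a specific library's API.

```python
import numpy as np

def gp_posterior(k, mu, X, y, sigma2, Xstar):
    """Posterior mean and covariance at the points Xstar, given data (X, y)
    with per-point noise variances sigma2; a direct transcription of (3.1)."""
    K = k(X, X) + np.diag(sigma2)             # K(x^(1:n), x^(1:n)) + diag{sigma^2}
    Ks = k(Xstar, X)                          # K(x, x^(1:n))
    alpha = np.linalg.solve(K, y - mu(X))     # solve instead of forming the inverse
    mean = mu(Xstar) + Ks @ alpha
    cov = k(Xstar, Xstar) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```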
4 Parallel knowledge gradient (q-KG)
In this section, we propose a novel parallel Bayesian optimization algorithm by generalizing the
concept of the knowledge gradient from [7] to the parallel setting. The knowledge gradient policy in
[7] for discrete A chooses the next sampling decision by maximizing the expected incremental value
of a measurement, without assuming (as expected improvement does) that the point returned as the
optimum must be a previously sampled point.
We now show how to compute this expected incremental value of an additional iteration in the
parallel setting. Suppose that we have observed n function values. If we were to stop measuring now,
$\min_{x \in A} \mu^{(n)}(x)$ would be the minimum of the predictor of the GP. If instead we took one more batch
of samples, $\min_{x \in A} \mu^{(n+q)}(x)$ would be the minimum of the predictor of the GP. The difference
between these quantities, $\min_{x \in A} \mu^{(n)}(x) - \min_{x \in A} \mu^{(n+q)}(x)$, is the increment in expected solution
quality (given the posterior after $n + q$ samples) that results from the additional batch of samples.
This increment in solution quality is random given the posterior after $n$ samples, because
$\min_{x \in A} \mu^{(n+q)}(x)$ is itself a random variable due to its dependence on the outcome of the samples. We can compute the probability distribution of this difference (with more details given below),
and the q-KG algorithm values the sampling decision $z^{(1:q)} := \{z_1, z_2, \ldots, z_q\}$ according to its
expected value, which we call the parallel knowledge gradient factor, and indicate it using the notation
q-KG. Formally, we define the q-KG factor for a set of candidate points to sample z (1:q) as
$$ \text{q-KG}(z^{(1:q)}, A) = \min_{x \in A} \mu^{(n)}(x) - \mathbb{E}_n\Big[ \min_{x \in A} \mu^{(n+q)}(x) \,\Big|\, y(z^{(1:q)}) \Big], \qquad (4.1) $$

where $\mathbb{E}_n[\cdot] := \mathbb{E}\big[\,\cdot\,\big|\, x^{(1:n)}, y^{(1:n)}\big]$ is the expectation taken with respect to the posterior distribution
after n evaluations. Then we choose to evaluate the next batch of q points that maximizes the parallel
knowledge gradient,
$$ \max_{z^{(1:q)} \subset A} \text{q-KG}(z^{(1:q)}, A). \qquad (4.2) $$
By construction, the parallel knowledge gradient policy is Bayes-optimal for minimizing the minimum
of the predictor of the GP if only one decision is remaining. The q-KG algorithm will reduce to
the parallel EI algorithm if function evaluations are noise-free and the final recommendation is
restricted to the previous sampling decisions. This is because, under the two conditions above, the increment
in expected solution quality becomes
$$ \min_{x \in x^{(1:n)}} \mu^{(n)}(x) - \min_{x \in x^{(1:n)} \cup z^{(1:q)}} \mu^{(n+q)}(x) = \min y^{(1:n)} - \min\Big\{ \min y^{(1:n)},\ \min_{x \in z^{(1:q)}} \mu^{(n+q)}(x) \Big\} = \Big( \min y^{(1:n)} - \min_{x \in z^{(1:q)}} \mu^{(n+q)}(x) \Big)^{+}, $$
which is exactly the parallel EI acquisition function. However, computing q-KG and its gradient is
very expensive. We will address the computational issues in Section 5. The full description of the
q-KG algorithm is summarized as follows.
Algorithm 1 The q-KG algorithm
Require: the number of initial stage samples I, and the number of main stage sampling iterations N.
1: Initial Stage: draw I initial samples from a Latin hypercube design in A, $x^{(i)}$ for $i = 1, \ldots, I$.
2: Main Stage:
3: for s = 1 to N do
4:   Solve (4.2), i.e. get $(z_1^*, z_2^*, \ldots, z_q^*) = \arg\max_{z^{(1:q)} \subset A} \text{q-KG}(z^{(1:q)}, A)$
5:   Sample these points $(z_1^*, z_2^*, \ldots, z_q^*)$, re-train the hyperparameters of the GP by MLE, and update the posterior distribution of $f$.
6: end for
7: return $x^* = \arg\min_{x \in A} \mu^{(I+Nq)}(x)$.
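As a code skeleton, the outer loop of Algorithm 1 is short; each stage is passed in as a callable, and all names below (evaluate, init_design, fit_gp, maximize_qkg, argmin_mean) are hypothetical placeholders rather than the authors' implementation.

```python
def qkg_bayes_opt(evaluate, init_design, fit_gp, maximize_qkg, argmin_mean, N):
    X = list(init_design)                    # I Latin-hypercube points in A
    y = [evaluate(x) for x in X]
    for _ in range(N):                       # main stage
        gp = fit_gp(X, y)                    # re-train GP hyperparameters by MLE
        batch = maximize_qkg(gp)             # solve (4.2) for the next q points
        X += list(batch)
        y += [evaluate(z) for z in batch]
    return argmin_mean(fit_gp(X, y))         # x* = argmin_x mu^(I+Nq)(x)
```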
5 Computation of q-KG
In this section, we provide the strategy to maximize q-KG by a gradient-based optimizer. In Section 5.1
and Section 5.2, we describe how to compute q-KG and its gradient when A is finite in (4.1).
Section 5.3 describes an effective way to discretize A in (4.1). The readers should note that there are
two A's here: one is in (4.1), which is used to compute the q-KG factor given a sampling decision $z^{(1:q)}$;
the other is the feasible domain in (4.2) ($z^{(1:q)} \subset A$) that we optimize over. We are discretizing the
first A.
5.1 Estimating q-KG when A is finite in (4.1)
Following [7], we express $\mu^{(n+q)}(x)$ as

$$ \mu^{(n+q)}(x) = \mu^{(n)}(x) + K^{(n)}(x, z^{(1:q)}) \left[ K^{(n)}(z^{(1:q)}, z^{(1:q)}) + \mathrm{diag}\{\sigma^2(z^{(1)}), \ldots, \sigma^2(z^{(q)})\} \right]^{-1} \big( y(z^{(1:q)}) - \mu^{(n)}(z^{(1:q)}) \big). $$

Because $y(z^{(1:q)}) - \mu^{(n)}(z^{(1:q)})$ is normally distributed with zero mean and covariance matrix
$K^{(n)}(z^{(1:q)}, z^{(1:q)}) + \mathrm{diag}\{\sigma^2(z^{(1)}), \ldots, \sigma^2(z^{(q)})\}$ with respect to the posterior after $n$ observations,
we can rewrite $\mu^{(n+q)}(x)$ as

$$ \mu^{(n+q)}(x) = \mu^{(n)}(x) + \tilde{\sigma}_n(x, z^{(1:q)})\, Z_q, \qquad (5.1) $$

where $Z_q$ is a standard $q$-dimensional normal random vector, and

$$ \tilde{\sigma}_n(x, z^{(1:q)}) = K^{(n)}(x, z^{(1:q)})\, \big(D^{(n)}(z^{(1:q)})^T\big)^{-1}, $$

where $D^{(n)}(z^{(1:q)})$ is the Cholesky factor of the covariance matrix $K^{(n)}(z^{(1:q)}, z^{(1:q)}) + \mathrm{diag}\{\sigma^2(z^{(1)}), \ldots, \sigma^2(z^{(q)})\}$. Now we can compute the q-KG factor using Monte Carlo sampling when A is finite: we can sample $Z_q$, compute (5.1), then plug in (4.1), repeat many times and
take average.
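Under the assumption that the relevant posterior quantities on a finite A are available as arrays, this Monte Carlo estimator is a few lines of NumPy (a sketch; the array names are illustrative).

```python
import numpy as np

def qkg_mc(muA, KxZ, KZZ, sigma2Z, R, rng):
    """Monte Carlo estimate of q-KG(z^(1:q), A) for finite A.
    muA: (|A|,) values mu^(n)(x); KxZ: (|A|, q) values K^(n)(x, z^(1:q));
    KZZ: (q, q) matrix K^(n)(z^(1:q), z^(1:q)); sigma2Z: (q,) noise variances."""
    D = np.linalg.cholesky(KZZ + np.diag(sigma2Z))   # D^(n)(z^(1:q))
    Sig = KxZ @ np.linalg.inv(D).T                   # sigma~_n(x, z^(1:q)), cf. (5.1)
    Zq = rng.normal(size=(R, KZZ.shape[0]))          # R draws of the q-vector Z_q
    mins = (muA[None, :] + Zq @ Sig.T).min(axis=1)   # min_x mu^(n+q)(x) per draw
    return muA.min() - mins.mean()
```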
5.2 Estimating the gradient of q-KG when A is finite in (4.1)
In this section, we propose an unbiased estimator of the gradient of q-KG using IPA when A is finite.
Accessing a stochastic gradient makes optimization much easier. By (5.1), we express q-KG as
$$ \text{q-KG}(z^{(1:q)}, A) = \mathbb{E}_{Z_q}\big[ g(z^{(1:q)}, A, Z_q) \big], \qquad (5.2) $$

where $g(z^{(1:q)}, A, Z_q) = \min_{x \in A} \mu^{(n)}(x) - \min_{x \in A} \big( \mu^{(n)}(x) + \tilde{\sigma}_n(x, z^{(1:q)}) Z_q \big)$. Under the condition that $\mu$ and
$K$ are continuously differentiable, one can show that (please see the details in the supplementary
materials)

$$ \frac{\partial}{\partial z_{ij}}\, \text{q-KG}(z^{(1:q)}, A) = \mathbb{E}_{Z_q}\left[ \frac{\partial}{\partial z_{ij}}\, g(z^{(1:q)}, A, Z_q) \right], \qquad (5.3) $$

where $z_{ij}$ is the $j$th dimension of the $i$th point in $z^{(1:q)}$. By the formula of $g$,

$$ \frac{\partial}{\partial z_{ij}}\, g(z^{(1:q)}, A, Z_q) = \frac{\partial}{\partial z_{ij}}\, \mu^{(n)}\big(x^*(\text{before})\big) - \frac{\partial}{\partial z_{ij}}\, \mu^{(n)}\big(x^*(\text{after})\big) - \frac{\partial}{\partial z_{ij}}\, \tilde{\sigma}_n\big(x^*(\text{after}), z^{(1:q)}\big)\, Z_q, $$

where $x^*(\text{before}) = \arg\min_{x \in A} \mu^{(n)}(x)$, $x^*(\text{after}) = \arg\min_{x \in A} \big( \mu^{(n)}(x) + \tilde{\sigma}_n(x, z^{(1:q)}) Z_q \big)$,
and

$$ \frac{\partial}{\partial z_{ij}}\, \tilde{\sigma}_n\big(x^*(\text{after}), z^{(1:q)}\big) = \left[ \frac{\partial}{\partial z_{ij}} K^{(n)}\big(x^*(\text{after}), z^{(1:q)}\big) \right] \big(D^{(n)}(z^{(1:q)})^T\big)^{-1} - K^{(n)}\big(x^*(\text{after}), z^{(1:q)}\big)\, \big(D^{(n)}(z^{(1:q)})^T\big)^{-1} \left[ \frac{\partial}{\partial z_{ij}} D^{(n)}(z^{(1:q)})^T \right] \big(D^{(n)}(z^{(1:q)})^T\big)^{-1}. $$
Now we can sample many times and take average to estimate the gradient of q-KG via (5.3). This
technique is called infinitesimal perturbation analysis (IPA) in gradient estimation [14]. Since we can
estimate the gradient of q-KG efficiently when A is finite, we will apply some standard gradient-based
optimization algorithms, such as multi-start stochastic gradient ascent to maximize q-KG.
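The last term in the derivative of $\tilde{\sigma}_n$ above comes from the standard identity for differentiating a matrix inverse; writing $D = D^{(n)}(z^{(1:q)})^T$ for brevity:

```latex
\frac{\partial D^{-1}}{\partial z_{ij}}
  \;=\; -\,D^{-1}\,\frac{\partial D}{\partial z_{ij}}\,D^{-1},
\qquad \text{obtained by differentiating both sides of } D\,D^{-1} = I .
```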
5.3 Approximating q-KG when A is infinite in (4.1) through discretization
We have specified how to maximize q-KG when A is finite in (4.1), but usually A is infinite. In this
case, we will discretize A to approximate q-KG, and then maximize over the approximate q-KG. The
discretization itself is an interesting research topic [17].
In this paper, the discrete set An is not chosen statically, but evolves over time: specifically, we
suggest drawing M samples from the global optima of the posterior distribution of the Gaussian
process (please refer to [11, 18] for a description of this technique). This sample set, denoted
by $A_n^M$, is then extended by the locations of previously sampled points $x^{(1:n)}$ and the set of candidate
points $z^{(1:q)}$. Then (4.1) can be restated as

$$ \text{q-KG}(z^{(1:q)}, A_n) = \min_{x \in A_n} \mu^{(n)}(x) - \mathbb{E}_n\Big[ \min_{x \in A_n} \mu^{(n+q)}(x) \,\Big|\, y(z^{(1:q)}) \Big], \qquad (5.4) $$

where $A_n = A_n^M \cup x^{(1:n)} \cup z^{(1:q)}$. For the experimental evaluation we recompute $A_n^M$ in every
iteration after updating the posterior of the Gaussian process.
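A simple way to realize this discretization, assuming a finite candidate grid on which the posterior mean muA and covariance covA have been evaluated, is the following sketch; the paper samples global optima of the posterior, which is approximated here by the minimizers of posterior sample paths on the grid.

```python
import numpy as np

def sample_An_M(muA, covA, grid, M, rng):
    """Draw M posterior sample paths on `grid` and keep each path's minimizer;
    the set of these minimizers plays the role of A_n^M in (5.4)."""
    L = np.linalg.cholesky(covA + 1e-10 * np.eye(len(grid)))  # jitter for stability
    paths = muA[None, :] + rng.normal(size=(M, len(grid))) @ L.T
    return np.unique(grid[paths.argmin(axis=1)], axis=0)
```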
6 Numerical experiments
We conduct experiments in two different settings: the noise-free setting and the noisy setting. In
both settings, we test the algorithms on well-known synthetic functions chosen from [1] and practical
problems. Following previous literature [19], we use a constant mean prior and the ARD Matérn 5/2
kernel. In the noisy setting, we assume that $\sigma^2(x)$ is constant across the domain A, and we estimate
it together with other hyperparameters in the GP using maximum likelihood estimation (MLE). We
set M = 1000 to discretize the domain following the strategy in Section 5.3.
[Figure 1 shows four panels (2d BraninNoNoise, 3d RosenbrockNoNoise, 5d AckleyNoNoise, and 6d HartmannNoNoise, each with batch size 4), plotting the log10 scale of the immediate regret against the number of function evaluations for GP-BUCB, GP-UCB-PE, MOE-qEI, Spearmint-qEI, and qKG.]
Figure 1: Performances on noise-free synthetic functions with q = 4. We report the mean and the standard deviation of the log10 scale of the immediate regret vs. the number of function evaluations.
In general, the q-KG algorithm performs as well or better than state-of-the-art benchmark algorithms on both synthetic and
real problems. It performs especially well in the noisy setting.
Before describing the details of the empirical results, we highlight the implementation details of our
method and the open-source implementations of the benchmark methods. Our implementation inherits
the open-source implementation of parallel EI from the Metrics Optimization Engine [24],
which is fully implemented in C++ with a python interface. We reuse their GP regression and GP
hyperparameter fitting methods and implement the q-KG method in C++. Besides comparing to
parallel EI in [24], we also compare our method to a well-known heuristic parallel EI implemented in
Spearmint [12], the parallel UCB algorithm (GP-BUCB) and parallel UCB with pure exploration
(GP-UCB-PE) both implemented in Gpoptimization [6].
6.1 Noise-free problems
In this section, we focus our attention on the noise-free setting, in which we can evaluate the objective
exactly. We show that parallel knowledge gradient outperforms or is competitive with state-of-art
benchmarks on several well-known test functions and tuning practical machine learning algorithms.
6.1.1 Synthetic functions
First, we test our algorithm along with the benchmarks on 4 well-known synthetic test functions:
Branin2 on the domain $[-15, 15]^2$, Rosenbrock3 on the domain $[-2, 2]^3$, Ackley5 on the domain
$[-2, 2]^5$, and Hartmann6 on the domain $[0, 1]^6$. We initiate our algorithms by randomly sampling
2d + 2 points from a Latin hypercube design, where d is the dimension of the problem. Figure 1
reports the mean and the standard deviation of the base 10 logarithm of the immediate regret by
running 100 random initializations with batch size q = 4.
The results show that q-KG is significantly better on Rosenbrock3, Ackley5 and Hartmann6, and is
slightly worse than the best of the other benchmarks on Branin2. Especially on Rosenbrock3 and
Ackley5, q-KG makes dramatic progress in early iterations.
6.1.2 Tuning logistic regression and convolutional neural networks (CNN)
In this section, we test the algorithms on two practical problems: tuning logistic regression on the
MNIST dataset and tuning CNN on the CIFAR10 dataset. We set the batch size to q = 4.
First, we tune logistic regression on the MNIST dataset. This task is to classify handwritten digits
from images, and is a 10-class classification problem. We train logistic regression on a training set
with 60000 instances with a given set of hyperparameters and test it on a test set with 10000 instances.
We tune 4 hyperparameters: mini batch size from 10 to 2000, training iterations from 100 to 10000,
the ℓ₂ regularization parameter from 0 to 1, and learning rate from 0 to 1. We report the mean and
standard deviation of the test error for 20 independent runs. From the results, one can see that both
algorithms are making progress at the initial stage while q-KG can maintain this progress for longer
and results in a better algorithm configuration in general.
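For reference, the search space just described can be encoded as simple box bounds; the encoding below is a hypothetical illustration, not the authors' configuration file.

```python
# Hypothetical encoding of the 4-dimensional MNIST logistic-regression space;
# each tuple is (lower, upper) on the raw scale described in the text.
search_space = {
    "mini_batch_size":     (10, 2000),
    "training_iterations": (100, 10000),
    "l2_regularization":   (0.0, 1.0),
    "learning_rate":       (0.0, 1.0),
}
```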
[Figure 2 shows two panels (Logistic Regression on MNIST, and CNN on CIFAR10), plotting test error against the number of function evaluations for MOE-qEI, Spearmint-qEI, and qKG.]
Figure 2: Performances on tuning machine learning algorithms with q = 4
In the second experiment, we tune a CNN on CIFAR10 dataset. This is also a 10-class classification
problem. We train the CNN on the 50000 training data with certain hyperparameters and test it on
the test set with 10000 instances. For the network architecture, we choose the one in the TensorFlow
tutorial. It consists of 2 convolutional layers, 2 fully connected layers, and on top of them is a softmax
layer for final classification. We tune 8 hyperparameters in total: the mini batch size from 10 to 1000,
training epoch from 1 to 10, the ℓ₂ regularization parameter from 0 to 1, learning rate from 0 to 1,
the kernel size from 2 to 10, the number of channels in convolutional layers from 10 to 1000, the
number of hidden units in fully connected layers from 100 to 1000, and the dropout rate from 0 to 1.
We report the mean and standard deviation of the test error for 5 independent runs. In this example,
the q-KG is making better (more aggressive) progress than parallel EI even in the initial stage and
maintain this advantage to the end. This architecture has been carefully tuned by the human expert,
and achieve a test error around 14%, and our automatic algorithm improves it to around 11%.
6.2 Noisy problems
In this section, we study problems with noisy function evaluations. Our results show that the
performance gains over benchmark algorithms from q-KG evident in the noise-free setting are even
larger in the noisy setting.
6.2.1 Noisy synthetic functions
We test on the same 4 synthetic functions from the noise-free setting, and add independent Gaussian
noise with standard deviation σ = 0.5 to the function evaluation. The algorithms are not given this
standard deviation, and must learn it from data.
The results in Figure 3 show that q-KG is consistently better than or at least competitive with
all competing methods. Also observe that the performance advantage of q-KG is larger than for
noise-free problems.
6.2.2 Noisy logistic regression with small test sets
Testing on a large test set such as ImageNet is slow, especially when we must test many times for
different hyperparameters. To speed up hyperparameter tuning, we may instead test the algorithm on
a subset of the testing data to approximate the test error on the full set. We study the performance of
our algorithm and benchmarks in this scenario, focusing on tuning logistic regression on MNIST.
We train logistic regression on the full training set of 60,000, but we test the algorithm by testing on
1,000 randomly selected samples from the test set, which provides a noisy approximation of the test
error on the full test set.
[Figure 3 shows four panels (2d Branin, 3d Rosenbrock, 5d Ackley, and 6d Hartmann, each with batch size 4), plotting the log10 scale of the immediate regret against the number of function evaluations for GP-BUCB, GP-UCB-PE, MOE-qEI, Spearmint-qEI, and qKG.]
Figure 3: Performances on noisy synthetic functions with q = 4. We report the mean and the standard deviation of the log10 scale of the immediate regret vs. the number of function evaluations.
[Figure 4 plots test error against the number of function evaluations for MOE-qEI, Spearmint-qEI, and qKG when tuning logistic regression on MNIST with smaller test sets.]
Figure 4: Tuning logistic regression on smaller test sets with q = 4
We report the mean and standard deviation of the test error on the full set using the hyperparameters
recommended by each parallel BO algorithm for 20 independent runs. The result shows that q-KG
is better than both versions of parallel EI, and its final test error is close to the noise-free test error
(which is substantially more expensive to obtain). As we saw with synthetic test functions, q-KG's
performance advantage in the noisy setting is wider than in the noise-free setting.
Acknowledgments
The authors were partially supported by NSF CAREER CMMI-1254298, NSF CMMI-1536895, NSF
IIS-1247696, AFOSR FA9550-12-1-0200, AFOSR FA9550-15-1-0038, and AFOSR FA9550-16-1-0046.
7 Conclusions
In this paper, we introduce a novel batch Bayesian optimization method q-KG, derived from a
decision-theoretical perspective, and develop a computational method to implement it efficiently. We
show that q-KG outperforms or is competitive with the state-of-the-art benchmark algorithms on several
synthetic functions and in tuning practical machine learning algorithms.
References
[1] Bingham, D. (2015). Optimization test problems. http://www.sfu.ca/~ssurjano/optimization.html.
[2] Chevalier, C. and Ginsbourger, D. (2013). Fast computation of the multi-points expected improvement with applications in batch selection. In Learning and Intelligent Optimization, pages 59–69. Springer.
[3] Contal, E., Buffoni, D., Robicquet, A., and Vayatis, N. (2013). Parallel gaussian process optimization with upper confidence bound and pure exploration. In Machine Learning and Knowledge Discovery in Databases, pages 225–240. Springer.
[4] Deng, L. and Yu, D. (2014). Deep learning: Methods and applications. Foundations and Trends in Signal Processing, 7(3–4):197–387.
[5] Desautels, T., Krause, A., and Burdick, J. W. (2014). Parallelizing exploration-exploitation tradeoffs in gaussian process bandit optimization. The Journal of Machine Learning Research, 15(1):3873–3923.
[6] Contal, E. et al. (2015). Gpoptimization. https://reine.cmla.ens-cachan.fr/e.contal/gpoptimization.
[7] Frazier, P., Powell, W., and Dayanik, S. (2009). The knowledge-gradient policy for correlated normal beliefs. INFORMS Journal on Computing, 21(4):599–613.
[8] Gardner, J. R., Kusner, M. J., Xu, Z. E., Weinberger, K. Q., and Cunningham, J. (2014). Bayesian optimization with inequality constraints. In ICML, pages 937–945.
[9] Gelbart, M., Snoek, J., and Adams, R. (2014). Bayesian optimization with unknown constraints. In Proceedings of the Thirtieth Annual Conference on Uncertainty in Artificial Intelligence (UAI-14), pages 250–259, Corvallis, Oregon. AUAI Press.
[10] Ginsbourger, D., Le Riche, R., and Carraro, L. (2010). Kriging is well-suited to parallelize optimization. In Computational Intelligence in Expensive Optimization Problems, pages 131–162. Springer.
[11] Hernández-Lobato, J. M., Hoffman, M. W., and Ghahramani, Z. (2014). Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems, pages 918–926.
[12] Snoek, J., Larochelle, H., and Adams, R. P. (2015). Spearmint. https://github.com/HIPS/Spearmint.
[13] Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.
[14] L'Ecuyer, P. (1990). A unified view of the IPA, SF, and LR gradient estimation techniques. Management Science, 36(11):1364–1383.
[15] Marmin, S., Chevalier, C., and Ginsbourger, D. (2015). Differentiating the multipoint expected improvement for optimal batch design. In International Workshop on Machine Learning, Optimization and Big Data, pages 37–48. Springer.
[16] Rasmussen, C. E. (2006). Gaussian processes for machine learning.
[17] Scott, W., Frazier, P., and Powell, W. (2011). The correlated knowledge gradient for simulation optimization of continuous parameters using gaussian process regression. SIAM Journal on Optimization, 21(3):996–1026.
[18] Shah, A. and Ghahramani, Z. (2015). Parallel predictive entropy search for batch global optimization of expensive objective functions. In Advances in Neural Information Processing Systems, pages 3312–3320.
[19] Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959.
[20] Snoek, J., Swersky, K., Zemel, R., and Adams, R. (2014). Input warping for bayesian optimization of non-stationary functions. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1674–1682.
[21] Srinivas, N., Krause, A., Seeger, M., and Kakade, S. M. (2010). Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1015–1022.
[22] Swersky, K., Snoek, J., and Adams, R. P. (2013). Multi-task bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004–2012.
[23] Törn, A. and Zilinskas, A. (1989). Global optimization, volume 350 of Lecture Notes in Computer Science.
[24] Wang, J., Clark, S. C., Liu, E., and Frazier, P. I. (2014). Metrics optimization engine. http://yelp.github.io/MOE/. Last accessed on 2016-01-21.
[25] Wang, J., Clark, S. C., Liu, E., and Frazier, P. I. (2015a). Parallel bayesian global optimization of expensive functions.
[26] Wang, Y., Reyes, K. G., Brown, K. A., Mirkin, C. A., and Powell, W. B. (2015b). Nested-batch-mode learning and stochastic optimization with an application to sequential multistage testing in materials science. SIAM Journal on Scientific Computing, 37(3):B361–B381.
5,867 | 6,308 | Anchor-Free Correlated Topic Modeling:
Identifiability and Algorithm
Kejun Huang∗
Xiao Fu∗
Nicholas D. Sidiropoulos
Department of Electrical and Computer Engineering
University of Minnesota
Minneapolis, MN 55455, USA
[email protected] [email protected] [email protected]
Abstract
In topic modeling, many algorithms that guarantee identifiability of the topics have
been developed under the premise that there exist anchor words, i.e., words that
only appear (with positive probability) in one topic. Follow-up work has resorted
to third- or higher-order statistics of the data corpus to relax the anchor word
assumption. Reliable estimates of higher-order statistics are hard to obtain, however,
and the identification of topics under those models hinges on uncorrelatedness of
the topics, which can be unrealistic. This paper revisits topic modeling based on
second-order moments, and proposes an anchor-free topic mining framework. The
proposed approach guarantees the identification of the topics under a much milder
condition compared to the anchor-word assumption, thereby exhibiting much
better robustness in practice. The associated algorithm only involves one eigendecomposition and a few small linear programs. This makes it easy to implement
and scale up to very large problem instances. Experiments using the TDT2 and
Reuters-21578 corpus demonstrate that the proposed anchor-free approach exhibits
very favorable performance (measured using coherence, similarity count, and
clustering accuracy metrics) compared to the prior art.
1
Introduction
Given a large collection of text data, e.g., documents, tweets, or Facebook posts, a natural question is
what are the prominent topics in these data. Mining topics from a text corpus is motivated by a number
of applications, from commercial design, news recommendation, document classification, content
summarization, and information retrieval, to national security. Topic mining, or topic modeling, has
attracted significant attention in the broader machine learning and data mining community [1].
In 2003, Blei et al. proposed a Latent Dirichlet Allocation (LDA) model for topic mining [2], where
the topics are modeled as probability mass functions (PMFs) over a vocabulary and each document
is a mixture of the PMFs. Therefore, a word-document text data corpus can be viewed as a matrix
factorization model. Under this model, posterior inference-based methods and approximations were
proposed [2, 3], but identifiability issues ? i.e., whether the matrix factors are unique ? were not
considered. Identifiability, however, is essential for topic modeling since it prevents the mixing of
topics that confounds interpretation.
In recent years, considerable effort has been invested in designing identifiable models and estimation
criteria as well as polynomial time solvable algorithms for topic modeling [4, 5, 6, 7, 8, 9, 10, 11].
Essentially, these algorithms are based on the so-called separable nonnegative matrix factorization
(NMF) model [12]. The key assumption is that every topic has an ?anchor word? that only appears
in that particular topic. Based on this assumption, two classes of algorithms are usually employed,
namely linear programming based methods [5, 7] and greedy pursuit approaches [11, 6, 8, 10]. The
*These authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
former class has a serious complexity issue, as it lifts the number of variables to the square of the size
of vocabulary (or documents); the latter, although computationally very efficient, usually suffers from
error propagation, if at some point one anchor word is incorrectly identified. Furthermore, since all
the anchor word-based approaches essentially convert topic identification to the problem of seeking
the vertices of a simplex, most of the above algorithms require normalizing each data column (or row)
by its ℓ1 norm. However, normalization at the factorization stage is usually not desired, since it may
destroy the good conditioning of the data matrix brought by pre-processing and amplify noise [8].
Unlike many NMF-based methods that work directly with the word-document data, the approach
proposed by Arora et al. [9, 10] works with the pairwise word-word correlation matrix, which has
the advantage of suppressing sampling noise and also features better scalability. However, [9, 10]
did not relax the anchor-word assumption or the need for normalization, and did not explore the
symmetric structure of the co-occurrence matrix ? i.e., the algorithms in [9, 10] are essentially the
same asymmetric separable NMF algorithms as in [4, 6, 8].
The anchor-word assumption is reasonable in some cases, but using models without it is more
appealing in more critical scenarios, e.g., when some topics are closely related and many key words
overlap. Identifiable models without anchor words have been considered in the literature; e.g.,
[13, 14, 15] make use of third or higher-order statistics of the data corpus to formulate the topic
modeling problem as a tensor factorization problem. There are two major drawbacks with this
approach: i) third- or higher-order statistics require a lot more samples for reliable estimation relative
to their lower-order counterparts (e.g., second-order word correlation statistics); and ii) identifiability
is guaranteed only when the topics are uncorrelated, in which case a super-symmetric parallel factor analysis
(PARAFAC) model can be obtained [13, 14]. Uncorrelatedness is a restrictive assumption [10]. When
the topics are correlated, the model becomes a Tucker model which is not identifiable in general;
identifiability needs more assumptions, e.g., sparsity of topic PMFs [15].
Contributions. In this work, our interest lies in topic mining using word-word correlation matrices
like in [9, 10], because of its potential scalability and noise robustness. We propose an anchor-free
identifiable model and a practically implementable companion algorithm. Our contributions are twofold: First, we propose an anchor-free topic identification criterion. The criterion aims at factoring
the word-word correlation matrix using a word-topic PMF matrix and a topic-topic correlation matrix
via minimizing the determinant of the topic-topic correlation matrix. We show that under a so-called
sufficiently scattered condition, which is much milder than the anchor-word assumption, the two
matrices can be uniquely identified by the proposed criterion. We emphasize that the proposed
approach does not need to resort to higher-order statistics tensors to ensure topic identifiability, and
it can naturally deal with correlated topics, unlike what was previously available in topic modeling,
to the best of our knowledge. Second, we propose a simple procedure for handling the proposed
criterion that only involves eigen-decomposition of a large but sparse matrix, plus a few small linear
programs, and is therefore highly scalable and well-suited for topic mining. Unlike greedy pursuit-based
algorithms, the proposed algorithm does not involve deflation and is thus free from error propagation;
it also does not require normalization of the data columns / rows. Carefully designed experiments
using the TDT2 and Reuters text corpora showcase the effectiveness of the proposed approach.
2 Background
Consider a document corpus D ∈ R^{V×D}, where each column of D corresponds to a document and
D(v, d) denotes a certain measurement of word v in document d, e.g., the word-frequency of term v
in document d or the term frequency-inverse document frequency (tf-idf) measurement that is often
used in topic mining. A commonly used model is
D ≈ CW,    (1)

where C ∈ R^{V×F} is the word-topic matrix, whose f-th column C(:, f) represents the probability
mass function (PMF) of topic f over a vocabulary of words, and W (f, d) denotes the weight of topic
f in document d [2, 13, 10]. Since matrix C and W are both nonnegative, (1) becomes a nonnegative
matrix factorization (NMF) model, and many early works tried to use NMF and variants to deal with
this problem [16]. However, NMF does not admit a unique solution in general, unless both C and W
satisfy some sparsity-related conditions [17]. In recent years, much effort has been put in devising
polynomial time solvable algorithms for NMF models that admit unique factorization. Such models
and algorithms usually rely on an assumption called "separability" in the NMF literature [12]:

Assumption 1 (Separability / Anchor-Word Assumption) There exists a set of indices Λ = {v_1, ..., v_F} such that C(Λ, :) = Diag(c), where c ∈ R^F.
In topic modeling, it turns out that the separability condition has a nice physical interpretation, i.e., every topic f for f = 1, ..., F has a "special" word that has nonzero probability of appearing in topic f and zero probability of appearing in other topics. These words are called "anchor words" in the topic modeling literature. Under Assumption 1, the task of matrix factorization boils down to finding these anchor words v_1, ..., v_F, since D(Λ, :) = Diag(c)W, which is already a scaled version of W, and then C can be estimated via (constrained) least squares.
Many algorithms have been proposed to tackle this index-picking problem in the context of separable NMF, hyperspectral unmixing, and text mining. The arguably simplest algorithm is the so-called successive projection algorithm (SPA) [6], presented in Algorithm 1. SPA-like algorithms first define a normalized matrix X = D^T Σ^{-1}, where Σ = Diag(1^T D^T) [11]. Note that X = GS, where G(:, f) = W(f, :)^T / ||W(f, :)||_1 and S(f, v) = C(v, f)||W(f, :)||_1 / ||D(v, :)||_1. Consequently, we have 1^T S = 1^T if W ≥ 0, meaning the columns of X all lie on the simplex spanned by the columns of G, and the vertices of the simplex correspond to the anchor words. Also, the columns of S all live in the unit simplex.

Algorithm 1: Successive Projection Algorithm [6]
input : D; F.
Σ = Diag(1^T D^T);
X = D^T Σ^{-1} (normalization);
Λ = ∅;
for f = 1, ..., F do
    v̂_f ← arg max_{v ∈ {1,...,V}} ||X(:, v)||_2;
    Λ ← [Λ, v̂_f];
    Θ ← arg min_Θ ||X − X(:, Λ)Θ||_F^2;
    X ← X − X(:, Λ)Θ;
end
output : Λ
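For concreteness, here is a minimal NumPy sketch of Algorithm 1; the function name and the unconstrained least-squares deflation are our own choices, and a dense nonnegative D is assumed.

import numpy as np

def spa(D, F):
    # Minimal sketch of Algorithm 1 (SPA); assumes a dense nonnegative
    # word-by-document matrix D of shape (V, N).
    word_sums = D.sum(axis=1)              # 1^T D^T, one entry per word
    X = (D / word_sums[:, None]).T         # X = D^T Sigma^{-1}, columns = words
    anchors = []
    for _ in range(F):
        v = int(np.argmax(np.linalg.norm(X, axis=0)))
        anchors.append(v)
        Xa = X[:, anchors]                 # columns of the chosen anchors
        Theta, *_ = np.linalg.lstsq(Xa, X, rcond=None)
        X = X - Xa @ Theta                 # deflation: remove their span
    return anchors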
After normalization, SPA sequentially identifies the vertices of the data simplex, in conjunction with
a deflation procedure. The algorithms in [8, 10, 11] can also be considered variants of SPA, with
different deflation procedures and pre-/post-processing. In particular, the algorithm in [8] avoids normalization: for real-world data, normalization at the factorization stage may amplify noise
and damage the good conditioning of the data matrix brought by pre-processing, e.g., the tf-idf
procedure [8]. To pick out vertices, there are also algorithms using linear programming and sparse
optimization [7, 5], but these have serious scalability issues and thus are less appealing.
In practice D may contain considerable noise, and this has been noted in the literature. In [9, 10,
14, 15], the authors proposed to use second and higher-order statistics for topic mining. Particularly,
Arora et al. [9, 10] proposed to work with the following matrix:
P = E{DD^T} = CEC^T,    (2)

where E = E{WW^T} can be interpreted as a topic-topic correlation matrix. The matrix P is by
definition a word-word correlation matrix, but also has a nice interpretation: if D(v, d) denotes the
frequency of word v occurring in document d, P (i, j) is the likelihood that term i and j co-occur
in a document [9, 10]. There are two advantages in using P : i) if there is zero-mean white noise, it
will be significantly suppressed through the averaging process; and ii) the size of P does not grow
with the size of the data if the vocabulary is fixed. The latter is a desired property when the number
of documents is very large, and we pick a (possibly limited but) manageable vocabulary to work
with. Problems with similar structure to that of P also arise in the context of graph models, where
communities and correlations appear as the underlying factors. The algorithm proposed in [10] also
makes use of Assumption 1 and is conceptually close to Algorithm 1. The work in [13, 14, 15]
relaxed the anchor-word assumption. The methods there make use of three or higher-order statistics,
e.g., P ∈ R^{V×V×V}, whose (i, j, k)-th entry represents the co-occurrence of three terms. The work in
[13, 14] showed that P is a tensor satisfying the parallel factor analysis (PARAFAC) model and thus
C is uniquely identifiable, if the topics are uncorrelated, which is a restrictive assumption (a counter
example would be politics and economy). When the topics are correlated, additional assumptions like
sparsity are needed to restore identifiability [15]. Another important concern is that reliable estimates
of higher-order statistics require much larger data sizes, and tensor decomposition is computationally
cumbersome as well.
Remark 1 Among all the aforementioned methods, the deflation-based methods are seemingly more
efficient. However, if the deflation procedure in Algorithm 1 (the update of Θ) has constraints like
in [8, 11], there is a serious complexity issue: solving a constrained least squares problem with
F V variables is not an easy task. Data sparsity is destroyed after the first deflation step, and thus
even first-order methods or coordinate descent as in [8, 11] do not really help. This point will be
exemplified in our experiments.
3 Anchor-Free Identifiable Topic Mining
In this work, we are primarily interested in mining topics from the matrix P because of its noise
robustness and scalability. We will formulate topic modeling as an optimization problem, and show
that the word-topic matrix C can be identified under a much more relaxed condition, which includes
the relatively strict anchor-word assumption as a special case.
3.1 Problem Formulation
Let us begin with the model P = CEC^T, subject to the constraint that each column of C represents the PMF of words appearing in a specific topic, such that C^T 1 = 1, C ≥ 0. Such a symmetric matrix decomposition is in general not identifiable, as we can always pick a non-singular matrix A ∈ R^{F×F} such that A^T 1 = 1, A ≥ 0, and define C̃ = CA, Ẽ = A^{-1}EA^{-T}, and then P = C̃ẼC̃^T with C̃^T 1 = 1, C̃ ≥ 0. We wish to find an identification criterion such that under some mild conditions the corresponding solution can only be the ground-truth E and C up to some trivial ambiguities such as a common column permutation. To this end, we propose the following criterion:

minimize_{E ∈ R^{F×F}, C ∈ R^{V×F}}  |det E|,   subject to  P = CEC^T,  C^T 1 = 1,  C ≥ 0.    (3)
The first observation is that if the anchor-word assumption is satisfied, the optimal solutions of the
above identification criterion are the ground-truth C and E and their column-permuted versions.
Formally, we show that:
Proposition 1 Let (C*, E*) be an optimal solution of (3). If the separability / anchor-word assumption (cf. Assumption 1) is satisfied and rank(P) = F, then C* = CΠ and E* = Π^T EΠ, where Π is a permutation matrix.
The proof of Proposition 1 can be found in the supplementary material. Proposition 1 is merely a
"sanity check" of the identification criterion in (3): it shows that the criterion is at least a sound one under the anchor-word assumption. Note that, when the anchor-word assumption is satisfied, SPA-type algorithms are in fact preferable over the identification criterion in (3), due to their simplicity.
The point of the non-convex formulation in (3) is that it can guarantee identifiability of C and E
even when the anchor-word assumption is grossly violated. To explain, we will need the following.
Assumption 2 (sufficiently scattered) Let cone(C^T)* denote the polyhedral cone {x : Cx ≥ 0}, and K denote the second-order cone {x : ||x||_2 ≤ 1^T x}. Matrix C is called sufficiently scattered if it satisfies: (i) cone(C^T)* ⊆ K, and (ii) cone(C^T)* ∩ bd K = {λe_f : λ ≥ 0, f = 1, ..., F}, where bd K denotes the boundary of K, i.e., bd K = {x : ||x||_2 = 1^T x}.
Our main result is based on this assumption, whose first consequence is as follows:
Lemma 1 If C ∈ R^{V×F} is sufficiently scattered, then rank(C) = F. In addition, given rank(P) = F, any feasible solution Ẽ ∈ R^{F×F} of Problem (3) has full rank and thus |det Ẽ| > 0.

Lemma 1 ensures that any feasible solution pair (C̃, Ẽ) of Problem (3) has full rank F when the ground-truth C is sufficiently scattered, which is important from the optimization perspective; otherwise |det Ẽ| can always be zero, which is a trivial optimal solution of (3). Based on Lemma 1, we further show that:
Theorem 1 Let (C*, E*) be an optimal solution of (3). If the ground truth C is sufficiently scattered (cf. Assumption 2) and rank(P) = F, then C* = CΠ and E* = Π^T EΠ, where Π is a permutation matrix.
The proof of Theorem 1 is relegated to the supplementary material. In words, for a sufficiently scattered C and an arbitrary square matrix E, given P = CEC^T, C and E can be identified up to permutation via solving (3). To understand the sufficiently scattered condition and Theorem 1, it is better to look at the dual cones. The notation cone(C^T)* = {x : Cx ≥ 0} comes from the fact that it is the dual cone of the conic hull of the row vectors of C, i.e., cone(C^T) = {C^T θ : θ ≥ 0}. A useful property of the dual cone is that for two convex cones, if K_1 ⊆ K_2, then K_2* ⊆ K_1*, which means the first requirement of Assumption 2 is equivalent to

K* ⊆ cone(C^T).    (4)

Note that the dual cone of K is another second-order cone [12], i.e., K* = {x : x^T 1 ≥ √(F−1) ||x||_2}, which is tangent to and contained in the nonnegative orthant.
Figure 1: A graphical view of rows of C (blue dots) and various cones in R^3, sliced at the plane 1^T x = 1; panels: (a) separable / anchor word, (b) sufficiently scattered, (c) not identifiable. The triangle indicates the non-negative orthant, the enclosing circle is K, and the smaller circle is K*. The shaded region is cone(C^T), and the polygon with dashed sides is cone(C^T)*. The matrix C can be identified up to column permutation in the left two cases, and clearly separability is more restrictive than (and a special case of) sufficiently scattered.
Eq. (4) and the definition of K* in fact give a straightforward comparison between the proposed sufficiently scattered condition and the existing anchor-word assumption. An illustration of Assumptions 1 and 2 is shown in Fig. 1 (a)-(b) using an F = 3 case, where one can see that sufficiently scattered is much more relaxed compared to the anchor-word assumption: if the rows of the word-topic matrix C are geometrically scattered enough so that cone(C^T) contains the inner circle (i.e., the second-order cone K*), then the identifiability of the criterion in (3) is guaranteed. However, the anchor-word assumption requires that cone(C^T) fulfills the entire triangle, i.e., the nonnegative orthant, which is far more restrictive. Fig. 1(c) shows a case where rows of C are not "well scattered" in the non-negative orthant, and indeed such a matrix C cannot be identified via solving (3).
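Requirement (4) can be spot-checked numerically. The sketch below samples random directions in K* and tests membership in cone(C^T) via nonnegative least squares; it is a heuristic illustration with our own function names, not a certificate.

import numpy as np
from scipy.optimize import nnls

def sample_check_eq4(C, trials=5000, tol=1e-6, seed=0):
    # Monte-Carlo spot check of Eq. (4): sample directions in
    # K* = {x : 1^T x >= sqrt(F-1) * ||x||_2} and test x = C^T theta,
    # theta >= 0, via nonnegative least squares. Heuristic only.
    V, F = C.shape
    rng = np.random.default_rng(seed)
    tested = 0
    for _ in range(trials):
        x = rng.standard_normal(F)
        if x.sum() < np.sqrt(F - 1) * np.linalg.norm(x):
            continue                       # x not in K*
        tested += 1
        _, resid = nnls(C.T, x)
        if resid > tol * np.linalg.norm(x):
            return False, tested           # x in K* but outside cone(C^T)
    return True, tested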
Remark 2 A salient feature of the criterion in (3) is that it does not need to normalize the data
columns to a simplex: all the arguments in Theorem 1 are cone-based. The upshot is clear: there is
no risk of amplifying noise or changing the conditioning of P at the factorization stage. Furthermore,
matrix E can be any symmetric matrix; it can contain negative values, which may cover more
applications beyond topic modeling where E is always nonnegative and positive semidefinite. This
shows the surprising effectiveness of the sufficiently scattered condition.
The sufficiently scattered assumption appeared in identifiability proofs of several matrix factorization
models [17, 18, 19] with different identification criteria. Huang et al. [17] used this condition
to show the identifiability of plain NMF, while Fu et al. [19] related the sufficiently scattered
condition to the so-called volume-minimization criterion for blind source separation. Note that volume
minimization also minimizes a determinant-related cost function. Like the SPA-type algorithms,
volume minimization works with data that live in a simplex, therefore applying it still requires data
normalization, which is not desired in practice. Theorem 1 can be considered as a more natural
application of the sufficiently scattered condition to co-occurrence/correlation based topic modeling,
which explores the symmetry of the model and avoids normalization.
3.2 AnchorFree: A Simple and Scalable Algorithm
The identification criterion in (3) imposes an interesting yet challenging optimization problem. One
way to tackle it is to consider the following approximation:
minimize_{E,C}  ||P − CEC^T||_F^2 + λ|det E|,   subject to  C ≥ 0,  C^T 1 = 1,    (5)

where λ ≥ 0 balances the data fidelity and the minimal-determinant criterion. The difficulty is that the term CEC^T makes the problem tri-linear and not easily decoupled. Plus, tuning a good λ may also be difficult. In this work, we propose an easier procedure for handling the determinant-minimization problem in (3), which is summarized in Algorithm 2 and referred to as AnchorFree.
To explain the procedure, first notice that P is symmetric and positive semidefinite. Therefore, one can apply a square-root decomposition P = BB^T, where B ∈ R^{V×F}. We can take advantage of well-established tools for eigen-decomposition of sparse matrices, and there is widely available software that can compute this very efficiently. Now, we have B = CE^{1/2}Q, Q^T Q = QQ^T = I, and E = E^{1/2}E^{1/2}; i.e., the representing coefficients of CE^{1/2} in the range space of B must be orthonormal because of the symmetry of P. We also notice that

minimize_{E,C,Q}  |det E^{1/2}Q|,   subject to  B = CE^{1/2}Q,  C^T 1 = 1,  C ≥ 0,  Q^T Q = I,    (6)

has the same optimal solutions as (3). Since Q is unitary, it does not affect the determinant, so we further let M = Q^T E^{−1/2} and obtain the following optimization problem:

maximize_M  |det M|,   subject to  M^T B^T 1 = 1,  BM ≥ 0.    (7)

By our reformulation, C has been marginalized and we have only F^2 variables left, which is significantly smaller compared to the variable size of the original problem, V F + F^2, where V is the vocabulary size. Problem (7) is still non-convex, but can be handled very efficiently. Here, we
propose to employ the solver proposed in [18], where the same subproblem (7) was used to solve
a dynamical system identification problem. The idea is to apply the co-factor expansion to deal
with the determinant objective function, first proposed in the context of non-negative blind source
separation [20]: if we fix all the columns of M except the f-th one, det M becomes a linear function with respect to M(:, f), i.e., det M = Σ_{k=1}^{F} (−1)^{f+k} M(k, f) det M̄_{k,f} = a^T M(:, f), where a = [a_1, ..., a_F]^T, a_k = (−1)^{f+k} det M̄_{k,f}, ∀ k = 1, ..., F, and M̄_{k,f} is the matrix obtained by removing the k-th row and f-th column of M. Maximizing |a^T x| subject to linear constraints is still a non-convex problem, but we can solve it via maximizing both a^T x and −a^T x, followed by picking the solution that gives the larger absolute objective. Then, cyclically updating the columns of M results in an alternating optimization (AO) algorithm. The algorithm is computationally lightweight: each linear program only involves F variables, leading to a worst-case complexity of O(F^{3.5}) flops even when the interior-point method is employed, and empirically it takes 5 or fewer AO iterations
to converge. In the supplementary material, simulations on synthetic data are given, showing that
Algorithm 2 can indeed recover the ground truth matrix C and E even when matrix C grossly
violates the separability / anchor-word assumption.
Algorithm 2: AnchorFree
input : D, F.
P ← Co-Occurrence(D);
P = BB^T, M ← I;
repeat
    for f = 1, ..., F do
        a_k = (−1)^{f+k} det M̄_{k,f}, ∀ k = 1, ..., F;
        // remove the k-th row and f-th column of M to obtain M̄_{k,f}
        m_max = arg max_x a^T x  s.t.  Bx ≥ 0, 1^T Bx = 1;
        m_min = arg min_x a^T x  s.t.  Bx ≥ 0, 1^T Bx = 1;
        M(:, f) = arg max_{m_max, m_min} (|a^T m_max|, |a^T m_min|);
    end
until convergence;
C* = BM;
E* = (C*^T C*)^{-1} C*^T P C* (C*^T C*)^{-1};
output : C*, E*
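The following is a compact Python sketch of Algorithm 2; the eigendecomposition and linear programs use SciPy, and all names are ours. For the large sparse matrices used in the experiments, scipy.sparse.linalg.eigsh would replace the dense eigendecomposition.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import linprog

def anchor_free(P, F, iters=5):
    # Sketch of Algorithm 2 (AnchorFree); assumes P is symmetric
    # positive semidefinite with rank >= F.
    vals, vecs = eigh(P)
    idx = np.argsort(vals)[::-1][:F]
    B = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))  # P = B B^T
    M = np.eye(F)
    # Shared LP constraints: B x >= 0 and 1^T B x = 1.
    A_ub, b_ub = -B, np.zeros(B.shape[0])
    A_eq, b_eq = B.sum(axis=0, keepdims=True), np.ones(1)
    bounds = [(None, None)] * F            # allow negative entries in x
    for _ in range(iters):
        for f in range(F):
            # Cofactor expansion: det M is linear in column f, a^T M[:, f].
            a = np.array([(-1) ** (f + k)
                          * np.linalg.det(np.delete(np.delete(M, k, 0), f, 1))
                          for k in range(F)])
            best_x, best_val = None, -np.inf
            for sign in (+1.0, -1.0):      # maximize a^T x and -a^T x
                res = linprog(sign * -a, A_ub=A_ub, b_ub=b_ub,
                              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
                if res.success and abs(a @ res.x) > best_val:
                    best_x, best_val = res.x, abs(a @ res.x)
            if best_x is not None:
                M[:, f] = best_x
    C = B @ M
    CtC_inv = np.linalg.inv(C.T @ C)
    E = CtC_inv @ C.T @ P @ C @ CtC_inv
    return C, E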
4 Experiments
Data In this section, we apply the proposed algorithm and the baselines to two popular text mining
datasets, namely, the NIST Topic Detection and Tracking (TDT2) and the Reuters-21578 corpora.
We use a subset of the TDT2 corpus consisting of 9,394 documents which are single-category
articles belonging to the largest 30 categories. The Reuters-21578 corpus is the ModApte version
where 8,293 single-category documents are kept. The original vocabulary sizes of the TDT2 and
the Reuters dataset are 36,771 and 18,933, respectively, and stop words are removed for each trial
of the experiments. We use the standard tf-idf data as the D matrix, and estimate the correlation
matrix using the biased estimator suggested in [9]. A standard pre-processing technique, namely,
normalized-cut weighted (NCW) [21], is applied to D; NCW is a well-known trick for handling the
unbalanced-cluster-size problem. For each trial of our experiment, we randomly draw F categories
of documents, form the P matrix, and apply the proposed algorithm and the baselines.
Baselines We employ several popular anchor word-based algorithms as baselines. Specifically,
the successive projection algorithm (SPA) [6], the successive nonnegative projection algorithm
(SNPA) [11], the XRAY algorithm [8], and the fast anchor words (FastAnchor) [10] algorithm.
Since we are interested in word-word correlation/co-occurrence based mining, all the algorithms are
Table 1: Experiment results on the TDT2 corpus.

Coh:
F   FastAnchor  SPA      SNPA     XRAY     AnchorFree
3   -612.72     -613.43  -613.43  -597.16  -433.87
4   -648.20     -648.04  -648.04  -657.51  -430.07
5   -641.79     -643.91  -643.91  -665.20  -405.19
6   -654.18     -645.68  -645.68  -674.30  -432.96
7   -668.92     -665.55  -665.55  -664.38  -397.77
8   -681.35     -674.45  -674.45  -657.78  -450.63
9   -688.54     -671.81  -671.81  -690.39  -416.44
10  -732.39     -724.64  -724.64  -698.59  -421.25
15  -734.13     -730.19  -730.19  -773.17  -445.30
20  -756.90     -747.99  -747.99  -819.36  -461.64
25  -792.92     -792.29  -792.29  -876.28  -473.95

SimCount:
F   FastAnchor  SPA    SNPA   XRAY    AnchorFree
3   7.98        7.98   7.98   8.94    1.84
4   10.60       11.18  11.18  13.70   2.88
5   13.06       13.36  13.36  22.56   4.40
6   18.94       18.10  18.10  31.56   7.18
7   20.14       18.84  18.84  39.06   4.48
8   24.82       25.14  25.14  40.30   9.12
9   27.50       29.10  29.10  53.68   9.70
10  31.08       29.86  29.86  53.16   13.02
15  51.62       52.62  52.62  59.96   41.88
20  66.26       65.00  65.00  82.92   79.60
25  69.46       66.00  66.00  101.52  133.42

ClustAcc:
F   FastAnchor  SPA   SNPA  XRAY  AnchorFree
3   0.71        0.74  0.75  0.73  0.98
4   0.70        0.69  0.69  0.69  0.94
5   0.63        0.63  0.62  0.64  0.92
6   0.65        0.58  0.59  0.60  0.91
7   0.62        0.60  0.59  0.58  0.90
8   0.57        0.56  0.58  0.57  0.87
9   0.61        0.58  0.58  0.53  0.86
10  0.59        0.55  0.54  0.49  0.85
15  0.51        0.50  0.50  0.42  0.80
20  0.47        0.47  0.47  0.38  0.77
25  0.46        0.47  0.47  0.37  0.74
Table 2: Experiment results on the Reuters-21578 corpus.

Coh:
F   FastAnchor  SPA      SNPA     XRAY     AnchorFree
3   -652.67     -647.28  -647.28  -574.72  -830.24
4   -633.69     -637.89  -637.89  -586.41  -741.35
5   -650.49     -652.53  -652.53  -581.73  -762.64
6   -654.74     -644.34  -644.34  -586.00  -705.60
7   -733.73     -732.01  -732.01  -612.97  -692.12
8   -735.23     -738.54  -738.54  -616.32  -726.37
9   -761.27     -755.46  -755.46  -640.36  -713.81
10  -764.18     -759.40  -759.40  -656.71  -709.48
15  -800.51     -801.17  -801.17  -585.18  -688.39
20  -859.48     -860.70  -860.70  -615.62  -683.64
25  -889.55     -890.16  -890.16  -633.75  -672.44

SimCount:
F   FastAnchor  SPA     SNPA    XRAY    AnchorFree
3   10.98       11.02   11.02   3.86    7.36
4   16.74       16.92   16.92   9.92    12.66
5   21.74       21.66   21.66   13.06   15.48
6   39.9        39.54   39.54   27.42   19.98
7   47.02       45.24   45.24   34.64   35.62
8   85.04       83.86   83.86   82.52   62.02
9   117.48      118.98  118.98  119.28  72.38
10  119.54      121.74  121.74  130.82  86.02
15  307.86      309.7   309.7   227.02  124.6
20  539.58      538.54  538.54  502.82  225.6
25  674.78      673     673     650.96  335.24

ClustAcc:
F   FastAnchor  SPA   SNPA  XRAY  AnchorFree
3   0.66        0.69  0.69  0.66  0.79
4   0.51        0.61  0.61  0.60  0.73
5   0.51        0.55  0.55  0.52  0.65
6   0.47        0.49  0.50  0.46  0.64
7   0.43        0.57  0.57  0.54  0.65
8   0.40        0.53  0.54  0.47  0.61
9   0.37        0.56  0.56  0.47  0.59
10  0.35        0.52  0.52  0.42  0.59
15  0.33        0.40  0.40  0.42  0.53
20  0.31        0.36  0.36  0.38  0.52
25  0.26        0.33  0.32  0.37  0.47
combined with the framework provided in [10] and the efficient RecoverL2 process is employed for
estimating the topics after the anchors are identified.
Evaluation To evaluate the results, we employ several metrics. First, coherence (Coh) is used to measure the single-topic quality. For a set of words V, the coherence is defined as Coh = Σ_{v1,v2 ∈ V} log((freq(v1, v2) + ε)/freq(v2)), where v1 and v2 denote the indices of two words in the vocabulary, freq(v2) and freq(v1, v2) denote the numbers of documents in which v2 appears and in which v1 and v2 co-occur, respectively, and ε = 0.01 is used to prevent taking the log of zero.
well-aligned to human judgment when evaluating a single topic: a higher coherence score means
better quality of a mined topic. However, coherence does not evaluate the relationship between
different mined topics; e.g., if the mined F topics are identical, the coherence score can still be high
but meaningless. To alleviate this, we also use the similarity count (SimCount) that was adopted in
[10]: for each topic, the similarity count is obtained simply by adding up the overlapped words of
the topics within the leading N words, and a smaller SimCount means the mined topics are more
distinguishable. When the topics are very correlated (but different), the leading words of the topics
may overlap with each other, and thus using SimCount might still not be enough to evaluate the
results. We also include clustering accuracy (ClustAcc), obtained by using the mined C* matrix
to estimate the weights W of the documents, and applying k-means to W . Since the ground-truth
labels of TDT2 and Reuters are known, clustering accuracy can be calculated, and it serves as a good
indicator of topic mining results.
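As a reference for how these metrics can be computed, here is a short sketch; the function names are ours, and the SimCount variant shown counts pairwise overlaps of the leading-word lists, which is one reasonable reading of the description above.

import numpy as np
from itertools import combinations

def coherence(doc_word, top_words, eps=0.01):
    # Coherence as defined above; doc_word is a boolean (N x V) incidence
    # array (assumes every word occurs in at least one document).
    score = 0.0
    for v1, v2 in combinations(top_words, 2):
        co = np.sum(doc_word[:, v1] & doc_word[:, v2])
        score += np.log((co + eps) / np.sum(doc_word[:, v2]))
    return score

def similarity_count(top_word_lists):
    # SimCount: total pairwise overlap among the topics' leading-word lists.
    return sum(len(set(a) & set(b))
               for a, b in combinations(top_word_lists, 2))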
Table 1 shows the experiment results on the TDT2 corpus. From F = 3 to 25, the proposed algorithm
(AnchorFree) gives very promising results: for the three considered metrics, AnchorFree consistently
gives better results compared to the baselines. Particularly, the ClustAcc?s obtained by AnchorFree
are at least 30% higher compared to the baselines for all cases. In addition, the single-topic quality of
the topics mined by AnchorFree is the highest in terms of coherence scores; the overlaps between
topics are the smallest except for F = 20 and 25.
Table 2 shows the results on the Reuters-21578 corpus. In this experiment, we can see that XRAY is
best in terms of single-topic quality, while AnchorFree is second best when F > 6. For SimCount,
AnchorFree gives the lowest values when F > 6. In terms of clustering accuracy, the topics obtained
by AnchorFree again lead to much higher clustering accuracies in all cases.
In terms of the runtime performance, one can see from Fig. 2(a) that FastAnchor, SNPA, XRAY and
AnchorFree perform similarly on the TDT2 dataset. SPA is the fastest algorithm since it has a recursive
update [6]. The SNPA and XRAY both perform nonnegative least squares-based deflation, which is
computationally heavy when the vocabulary size is large, as mentioned in Remark 1. AnchorFree
uses AO and small-scale linear programming, which is conceptually more difficult compared to SNPA
and XRAY. However, since the linear programs involved only have F variables and the number of AO
iterations is usually small (smaller than 5 in practice), the runtime performance is quite satisfactory
Figure 2: Runtime performance of the algorithms under various settings; panels: (a) TDT2, (b) Reuters-21578. Both panels plot runtime (sec., log scale) against F for FastAnchor, SPA, SNPA, XRAY, and AnchorFree.
Table 3: Twenty leading words of mined topics from an F = 5 case of the TDT2 experiment.

FastAnchor topics (with the anchor word found for each):
Topic 1 (anchor: predicts): allegations lewinsky clinton lady white hillary monica starr house husband dissipate president intern affair infidelity grand jury sexual justice obstruction
Topic 2 (anchor: slipping): poll cnnusa gallup allegations clinton presidents rating lewinsky president approval starr white monica house hurting slipping americans public sexual affair
Topic 3 (anchor: cleansing): columbia shuttle space crew astronauts nasa experiments mission stories fix repair rats unit aboard brain system broken nervous cleansing dioxide
Topic 4 (anchor: strangled): gm motors plants workers michigan flint strikes auto plant strike gms idled production walkouts north union assembly talks shut striking
Topic 5 (anchor: tenday): bulls jazz nba utah finals game chicago jordan series malone michael championship tonight lakers win karl lewinsky games basketball night

AnchorFree topics:
Topic 1: lewinsky monica starr grand white jury house clinton counsel intern independent president investigation affair lewinskys relationship sexual ken former starrs
Topic 2: gm motors plants flint workers michigan auto plant strikes gms strike union idled assembly production north shut talks autoworkers walkouts
Topic 3: shuttle space columbia astronauts nasa crew experiments rats mission nervous brain aboard system weightlessness earth mice animals fish neurological seven
Topic 4: bulls jazz nba chicago game utah finals jordan malone michael series championship karl pippen basketball win night sixth games title
Topic 5: jonesboro arkansas school shooting boys teacher students westside middle 11year fire girls mitchell shootings suspects funerals children killed 13year johnson
and is close to those of SNPA and XRAY which are greedy algorithms. The runtime performance
on the Reuters dataset is shown in Fig. 2(b), where one can see that the deflation-based methods are
faster. The reason is that the vocabulary size of the Reuters corpus is much smaller compared to that
of the TDT2 corpus (18,933 vs. 36,771).
Table 3 shows the leading words of the mined topics by FastAnchor and AnchorFree from an F = 5
case using the TDT2 corpus. We only present the result of FastAnchor since it gives qualitatively
the best benchmark; the complete result given by all baselines can be found in the supplementary
material. We see that the topics given by AnchorFree show clear diversity: Lewinsky scandal,
General Motors strike, Space Shuttle Columbia, 1997 NBA finals, and a school shooting in Jonesboro,
Arkansas. FastAnchor, on the other hand, exhibits great overlap on the first and the second mined
topics. Lewinsky also shows up in the fifth topic mined by FastAnchor, which is mainly about the
1997 NBA finals. This showcases the clear advantage of our proposed criterion in terms of giving
more meaningful and interpretable results, compared to the anchor-word based approaches.
5 Conclusion
In this paper, we considered identifiable anchor-free correlated topic modeling. A topic estimation
criterion based on the word-word co-occurrence/correlation matrix was proposed and its identifiability
conditions were proven. The proposed approach features topic identifiability guarantee under much
milder conditions compared to the anchor-word assumption, and thus exhibits better robustness to
model mismatch. A simple procedure that only involves one eigen-decomposition and a few small
linear programs was proposed to deal with the formulated criterion. Experiments on real text corpus
data showcased the effectiveness of the proposed approach.
Acknowledgment
This work is supported in part by the National Science Foundation (NSF) under the project numbers
NSF-ECCS 1608961 and NSF IIS-1247632 and in part by the Digital Technology Initiative (DTI)
Seed Grant, University of Minnesota.
References
[1] D. M. Blei. Probabilistic topic models. Communications of the ACM, 55(4):77-84, 2012.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[3] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228-5235, 2004.
[4] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization - provably. In ACM Symposium on Theory of Computing, pages 145-162. ACM, 2012.
[5] B. Recht, C. Re, J. Tropp, and V. Bittorf. Factoring nonnegative matrices with linear programs. In Proc. NIPS 2012, pages 1214-1222, 2012.
[6] N. Gillis and S. A. Vavasis. Fast and robust recursive algorithms for separable nonnegative matrix factorization. IEEE Trans. Pattern Anal. Mach. Intell., 36(4):698-714, April 2014.
[7] N. Gillis. Robustness analysis of hottopixx, a linear programming model for factoring nonnegative matrices. SIAM Journal on Matrix Analysis and Applications, 34(3):1189-1212, 2013.
[8] A. Kumar, V. Sindhwani, and P. Kambadur. Fast conical hull algorithms for near-separable non-negative matrix factorization. In Proc. ICML-12, 2012.
[9] S. Arora, R. Ge, and A. Moitra. Learning topic models - going beyond SVD. In Proc. FOCS 2012, pages 1-10. IEEE, 2012.
[10] S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. A practical algorithm for topic modeling with provable guarantees. In Proc. ICML-13, 2013.
[11] N. Gillis. Successive nonnegative projection algorithm for robust nonnegative blind source separation. SIAM Journal on Imaging Sciences, 7(2):1420-1450, 2014.
[12] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In Proc. NIPS 2003, volume 16, 2003.
[13] A. Anandkumar, Y.-K. Liu, D. J. Hsu, D. P. Foster, and S. M. Kakade. A spectral algorithm for latent Dirichlet allocation. In Proc. NIPS 2012, pages 917-925, 2012.
[14] A. Anandkumar, S. M. Kakade, D. P. Foster, Y.-K. Liu, and D. Hsu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. Technical report, 2012.
[15] A. Anandkumar, D. J. Hsu, M. Janzamin, and S. M. Kakade. When are overcomplete topic models identifiable? Uniqueness of tensor Tucker decompositions with structured sparsity. In Proc. NIPS 2013, pages 1986-1994, 2013.
[16] D. Cai, X. He, and J. Han. Locally consistent concept factorization for document clustering. IEEE Trans. Knowl. Data Eng., 23(6):902-913, 2011.
[17] K. Huang, N. Sidiropoulos, and A. Swami. Non-negative matrix factorization revisited: Uniqueness and algorithm for symmetric decomposition. IEEE Trans. Signal Process., 62(1):211-224, 2014.
[18] K. Huang, N. D. Sidiropoulos, E. E. Papalexakis, C. Faloutsos, P. P. Talukdar, and T. M. Mitchell. Principled neuro-functional connectivity discovery. In Proc. SIAM Conference on Data Mining (SDM), 2015.
[19] X. Fu, W.-K. Ma, K. Huang, and N. D. Sidiropoulos. Blind separation of quasi-stationary sources: Exploiting convex geometry in covariance domain. IEEE Trans. Signal Process., 63(9):2306-2320, May 2015.
[20] W.-K. Ma, T.-H. Chan, C.-Y. Chi, and Y. Wang. Convex analysis for non-negative blind source separation with application in imaging. In D. P. Palomar and Y. Eldar, editors, Convex Optimization in Signal Processing and Communications, chapter 7, pages 229-265. 2010.
[21] Wei Xu, Xin Liu, and Yihong Gong. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 267-273. ACM, 2003.
A Constant-Factor Bi-Criteria Approximation
Guarantee for k-means++
Dennis Wei
IBM Research
Yorktown Heights, NY 10598, USA
[email protected]
Abstract
This paper studies the k-means++ algorithm for clustering as well as the class of Dℓ sampling algorithms to which k-means++ belongs. It is shown that for any constant factor β > 1, selecting βk cluster centers by Dℓ sampling yields a constant-factor approximation to the optimal clustering with k centers, in expectation and without conditions on the dataset. This result extends the previously known O(log k) guarantee for the case β = 1 to the constant-factor bi-criteria regime. It also
improves upon an existing constant-factor bi-criteria result that holds only with
constant probability.
1 Introduction
The k-means problem and its variants constitute one of the most popular paradigms for clustering
[15]. Given a set of n data points, the task is to group them into k clusters, each defined by a cluster
center, such that the sum of distances from points to cluster centers (raised to a power ℓ) is minimized. Optimal clustering in this sense is known to be NP-hard [11, 3, 20, 6]. In practice, the most widely used algorithm remains Lloyd's [19] (often referred to as the k-means algorithm), which alternates
between updating centers given cluster assignments and re-assigning points to clusters.
In this paper, we study an enhancement to Lloyd's algorithm known as k-means++ [4] and the more general class of Dℓ sampling algorithms to which k-means++ belongs. These algorithms select cluster centers randomly from the given data points with probabilities proportional to their current costs. The clustering can then be refined using Lloyd's algorithm. Dℓ sampling is attractive for two reasons: First, it is guaranteed to yield an expected O(log k) approximation to the optimal clustering with k centers [4]. Second, it is as simple as Lloyd's algorithm, both conceptually as well
as computationally with O(nkd) running time in d dimensions.
The particular focus of this paper is on the setting where an optimal k-clustering remains the
benchmark but more than k cluster centers can be sampled to improve the approximation. Specifically,
it is shown that for any constant factor β > 1, if βk centers are chosen by Dℓ sampling, then a
constant-factor approximation to the optimal k-clustering is obtained. This guarantee holds in
expectation and for all datasets, like the one in [4], and improves upon the O(log k) factor therein.
Such a result is known as a constant-factor bi-criteria approximation since both the optimal cost and
the relevant degrees of freedom (k in this case) are exceeded but only by constant factors.
In the context of clustering, bi-criteria approximation guarantees can be valuable because an appropriate number of clusters k is almost never known or pre-specified in practice. Approaches to
determining k from the data are ideally based on knowing how the optimal cost decreases as k
increases, but obtaining this optimal trade-off between cost and k is NP-hard as mentioned earlier.
Alternatively, a simpler algorithm (like k-means++) that has a constant-factor bi-criteria guarantee
would ensure that the trade-off curve generated by this algorithm deviates by no more than constant
factors along both axes from the optimal curve. This may be more appealing than a deviation along
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the cost axis that grows as O(log k). Furthermore, if a solution with a specified number of clusters k
is truly required, then linear programming techniques can be used to select a k-subset from the βk
cluster centers while still maintaining a constant-factor approximation [1, 8].
The next section reviews existing work on Dℓ sampling and other clustering approximations. Section 2 formally states the problem, the Dℓ sampling algorithm, and existing lemmas regarding the algorithm.
Section 3 states the main results and compares them to previous results. Proofs are presented in
Section 4 with more algebraic proofs deferred to the supplementary material.
1.1 Related Work
Approximation algorithms for k-means (ℓ = 2), k-medians (ℓ = 1), and related problems span a
wide range in the trade-off between tighter approximation factors and lower algorithm complexity.
At one end, while exact algorithms [14] and polynomial-time approximation schemes (PTAS)
(see [22, 18, 9, 12, 13, 10] and references therein) may have polynomial running times in n, the
dependence on k and/or the dimension d is exponential or worse. Simpler local search [17, 5] and
linear programming [8, 16] algorithms offer constant-factor approximations but still with high-order
polynomial running times in n, and some rely on dense discretizations of size O(nε^{-d} log(1/ε)).
In contrast to the above, this paper focuses on highly practical algorithms in the Dℓ sampling class, including k-means++. As mentioned, it was proved in [4] that Dℓ sampling results in an O(log k)
approximation, in expectation and for all datasets. The current work extends this guarantee to the
constant-factor bi-criteria regime, also for all datasets. The authors of [4] also provided a matching
lower bound, exhibiting a dataset on which k-means++ achieves an expected Ω(log k) approximation.
Improved O(1) approximation factors have been shown for sampling algorithms like k-means++
provided that the dataset satisfies certain conditions. Such results were established in [24] for k-means++ and other variants of Lloyd's algorithm under the condition that the dataset is well-suited
in a sense to partitioning into k clusters, and for an algorithm called successive sampling [23] with
O(n(k + log n) + k² log² n) running time subject to a bound on the dispersion of the points.
In a similar direction to the one pursued in the present work, [1] showed that if the number of cluster
centers is increased to a constant factor times k, then k-means++ can achieve a constant-factor
approximation, albeit only with constant probability. An O(1) factor was also obtained independently
by [2] using more centers, of order O(k log k). It is important to note that the constant-probability
result of [1] in no way implies the main results herein, which are true in expectation and are therefore
stronger guarantees. Furthermore, Section 3.1 shows that a constant-probability corollary of Theorem
1 improves upon [1] by more than a factor of 2.
Recently, [21, 7] have also established constant-factor bi-criteria results for the k-means problem.
These works differ from the present paper in studying more complex local search and linear programming algorithms applied to large discretizations, of size n^{O(log(1/ε)/ε²)} (a high-order polynomial) in [21] and O(nε^{-d} log(1/ε)) in [7], the latter the same as in [17]. Moreover, [7] employs search
neighborhoods that are also of exponential size in d (requiring doubly exponential running time).
2 Preliminaries
2.1 Problem Definition
We are given n points x_1, ..., x_n in a real metric space X with metric D(x, y). The objective is to choose t cluster centers c_1, ..., c_t in X and assign points to the nearest cluster center to minimize the potential function

φ = Σ_{i=1}^{n} min_{j=1,...,t} D(x_i, c_j)^ℓ.    (1)

A cluster is thus defined by the points x_i assigned to a center c_j, where ties (multiple closest centers) are broken arbitrarily. For a subset of points S, define φ(S) = Σ_{x_i ∈ S} min_{j=1,...,t} D(x_i, c_j)^ℓ to be the contribution to the potential from S; φ(x_i) is the contribution from a single point x_i.
The exponent ℓ ≥ 1 in (1) is regarded as a problem parameter. Letting ℓ = 2 and D be Euclidean distance, we have what is usually known as the k-means problem, so-called because the optimal
Algorithm 1 Dℓ Sampling
Input: Data points x_1, ..., x_n, number of clusters t. Initialize φ(x_i) = 1 for i = 1, ..., n.
for j = 1 to t do
    Select jth center c_j = x_i with probability φ(x_i)/φ.
    Update φ(x_i) for i = 1, ..., n.
cluster centers are means of the points assigned to them. The choice ℓ = 1 is also popular and corresponds to the k-medians problem.

Throughout this paper, an optimal clustering will always refer to one that minimizes (1) over solutions with t = k clusters, where k ≥ 2 is given. Likewise, the term optimal cluster and symbol A will refer to one of the k clusters from this optimal solution. The goal is to approximate the potential φ* of this optimal k-clustering using t = βk cluster centers for β ≥ 1.
2.2 Dℓ Sampling Algorithm
The Dℓ sampling algorithm chooses cluster centers randomly from x_1, ..., x_n with probabilities proportional to their current contributions to the potential, as detailed in Algorithm 1. Following [4], the case ℓ = 2 is referred to as the k-means++ algorithm and the non-uniform probabilities used after the first iteration are referred to as D² weighting (hence Dℓ in general). For t cluster centers, the running time of Dℓ sampling is O(ntd) in d dimensions.
In practice, Algorithm 1 is used as an initialization to Lloyd's algorithm, which usually produces further decreases in the potential. The analysis herein pertains only to Algorithm 1 and not to the subsequent improvement due to Lloyd's algorithm.
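For illustration, the following is a minimal NumPy sketch of Algorithm 1; names are ours, and with ell = 2 and Euclidean distance it reduces to standard k-means++ seeding.

import numpy as np

def d_ell_sampling(X, t, ell=2, seed=0):
    # Minimal sketch of Algorithm 1; X holds one data point per row.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    phi = np.ones(n)                      # uniform first draw
    centers = []
    for _ in range(t):
        i = rng.choice(n, p=phi / phi.sum())
        centers.append(X[i])
        # Each point's contribution: min distance^ell to any chosen center
        d = np.linalg.norm(X - X[i], axis=1) ** ell
        phi = d if len(centers) == 1 else np.minimum(phi, d)
    return np.array(centers)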
2.3 Existing Lemmas Regarding Dℓ Sampling
The following lemmas synthesize useful results from [4] that bound the expected potential within a single optimal cluster due to selecting a center from that cluster with uniform or Dℓ weighting.
Lemma 1. [4, Lemmas 3.1 and 5.1] Given an optimal cluster A, let φ be the potential resulting from selecting a first cluster center randomly from A with uniform weighting. Then E[φ(A)] ≤ r_u^{(ℓ)} φ*(A) for any A, where

r_u^{(ℓ)} = 2 if ℓ = 2 and D is Euclidean, and r_u^{(ℓ)} = 2^ℓ otherwise.

Lemma 2. [4, Lemma 3.2] Given an optimal cluster A and an initial potential φ, let φ′ be the potential resulting from adding a cluster center selected randomly from A with Dℓ weighting. Then E[φ′(A)] ≤ r_D^{(ℓ)} φ*(A) for any A, where r_D^{(ℓ)} = 2^ℓ r_u^{(ℓ)}.

The factor of 2^ℓ between r_u^{(ℓ)} and r_D^{(ℓ)} for general ℓ is explained just before Theorem 5.1 in [4].
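Lemma 1 is easy to probe empirically for ℓ = 2 with Euclidean distance, where the expectation in fact equals 2φ*(A) exactly; the following sketch (names ours) compares the Monte-Carlo average against the bound.

import numpy as np

def check_lemma1(A, trials=20000, seed=0):
    # Empirical check of Lemma 1 for ell = 2, Euclidean distance:
    # E[phi(A)] with a uniformly chosen center equals 2 phi*(A) exactly.
    rng = np.random.default_rng(seed)
    mu = A.mean(axis=0)
    phi_star = np.sum((A - mu) ** 2)      # optimal 1-center potential
    avg = np.mean([np.sum((A - A[rng.integers(len(A))]) ** 2)
                   for _ in range(trials)])
    return avg, 2 * phi_star              # Monte-Carlo average vs. the bound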
3 Main Results
The main results of this paper are stated below in terms of the single-cluster approximation ratio r_D^{(ℓ)} defined by Lemma 2. Subsequently in Section 3.1, the results are discussed in the context of previous work.
Theorem 1. Let φ be the potential resulting from selecting βk cluster centers according to Algorithm 1, where β ≥ 1. The expected approximation ratio is then bounded as

E[φ]/φ*  ≤  r_D^{(ℓ)} ( 1 + min{ ϕ(k − 2)/((β − 1)k + ϕ), H_{k−1} } − 1/n ),

where ϕ = (1 + √5)/2 ≈ 1.618 is the golden ratio and H_k = 1 + 1/2 + ··· + 1/k ≈ log k is the kth harmonic number.
In the proof of Theorem 1 in Section 4.2, it is shown that the 1/n term is indeed non-positive and can
therefore be omitted, with negligible loss for large n.
3
The approximation ratio bound in Theorem 1 is stated as a function of k. The following corollary
confirms that the theorem also implies a constant-factor bi-criteria approximation.

Corollary 1. With the same definitions as in Theorem 1, the expected approximation ratio is bounded
as

    E[φ]/φ* ≤ r_D^(ℓ) ( 1 + γ/(β − 1) ).

Proof. The minimum in Theorem 1 is bounded by its first term. This term is in turn increasing in k
with asymptote γ/(β − 1), which can therefore be taken as a k-independent bound.

It follows from Corollary 1 that a constant "oversampling" ratio β > 1 leads to a constant-factor
approximation. Theorem 1 offers a further refinement for finite k.
The bounds in Theorem 1 and Corollary 1 consist of two factors. As β increases, the second,
parenthesized factor decreases to 1 either exactly or approximately as 1/(β − 1). The first factor
of r_D^(ℓ) however is no smaller than 4, and is a direct consequence of Lemma 2. Any future work on
improving Lemma 2 would therefore strengthen the approximation factors above.
3.1 Comparisons to Existing Results
A comparison of Theorem 1 to results in [4] is implicit in its statement since the H_{k−1} term in the
minimum comes directly from [4, Theorems 3.1 and 5.1]. For k = 2, 3, the first term in the minimum
is smaller than H_{k−1} for any β ≥ 1, and hence Theorem 1 is always an improvement. For k > 3,
Theorem 1 improves upon [4] for β greater than the critical value

    β_c = 1 + γ(k − 2 − H_{k−1}) / (k H_{k−1}).

Numerical evaluation of β_c shows that it reaches a maximum value of 1.204 at k = 22 and then
decreases back toward 1 roughly as 1/H_{k−1}. It can be concluded that for any k, at most 20%
oversampling is required for Theorem 1 to guarantee a better approximation than [4].
The most closely related result to Theorem 1 and Corollary 1 is found in [1, Theorem 1]. The latter
establishes a constant-factor bi-criteria approximation that holds only with constant probability, as
opposed to in expectation. Since a bound on the expectation implies a bound with constant probability
via Markov's inequality (but not the other way around), a direct comparison with [1] is possible.
Specifically, for ℓ = 2 and the t = ⌈16(k + √k)⌉ cluster centers assumed in [1], Theorem 1 in the
present work implies that

    E[φ]/φ* ≤ 8 ( 1 + min{ γ(k − 2) / (⌈15k + 16√k⌉ + γ), H_{k−1} } ) ≤ 8 ( 1 + γ/15 )

after taking k → ∞. Then by Markov's inequality,

    φ/φ* ≤ (8/0.97) ( 1 + γ/15 ) ≈ 9.137

with probability at least 1 − 0.97 = 0.03 as in [1]. This 9.137 approximation factor is less than half
the factor of 20 in [1].
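The constants in this comparison are simple to verify numerically; the following short script is our
own check rather than part of the paper:

    import numpy as np

    gamma = (1 + np.sqrt(5)) / 2      # golden ratio, ~1.618
    limit = 8 * (1 + gamma / 15)      # bound after taking k -> infinity, ~8.863
    markov = limit / 0.97             # constant-probability bound via Markov
    print(round(markov, 3))           # 9.137, versus the factor of 20 in [1]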
Corollary 1 may also be compared to the results in [21], which are obtained through more complex
algorithms applied to a large discretization, of size n^{O(log(1/ε)/ε²)} for reasonably small ε. The main
difference between Corollary 1 and the bounds in [21] is the extra factor of r_D^(ℓ). As discussed above,
this factor is due to Lemma 2 and is unlikely to be intrinsic to the D^ℓ sampling algorithm.
4 Proofs

The overall strategy used to prove Theorem 1 is similar to that in [4]. The key intermediate result is
Lemma 3 below, which relates the potential at a later iteration in Algorithm 1 to the potential at an
earlier iteration. Section 4.1 is devoted to proving Lemma 3. Subsequently in Section 4.2, Theorem 1
is proven by an application of Lemma 3.
In the sequel, we say that an optimal cluster A is covered by a set of cluster centers if at least one of
the centers lies in A. Otherwise A is uncovered. Also define ψ = r_D^(ℓ) φ* as an abbreviation.

Lemma 3. For an initial set of centers leaving u optimal clusters uncovered, let φ denote the
potential, U the union of uncovered clusters, and V the union of covered clusters. Let φ' denote
the potential resulting from adding t ≥ u centers, each selected randomly with D^ℓ weighting as in
Algorithm 1. Then the new potential is bounded in expectation as

    E[φ' | φ] ≤ c_V(t, u) φ(V) + c_U(t, u) ψ(U)

for coefficients c_V(t, u) and c_U(t, u) that depend only on t, u. This holds in particular for

    c_V(t, u) = (t + au + b) / (t − u + b) = 1 + (a + 1)u / (t − u + b),        (2a)

    c_U(t, u) = c_V(t − 1, u − 1)  if u > 0,
    c_U(t, u) = 0                  if u = 0,                                     (2b)

where the parameters a and b satisfy a + 1 ≥ b > 0 and ab ≥ 1. The choice of a, b that minimizes
c_V(t, u) in (2a) is a + 1 = b = γ.
4.1 Proof of Lemma 3
Lemma 3 is proven using induction, showing that if it holds for (t, u) and (t, u + 1), then it also
holds for (t + 1, u + 1), similar to the proof of [4, Lemma 3.3]. The proof is organized into three
parts. Section 4.1.1 provides base cases. In Section 4.1.2, sufficient conditions on the coefficients
c_V(t, u), c_U(t, u) are derived that allow the inductive step to be completed. In Section 4.1.3, it is
shown that the closed-form expressions in (2) are consistent with the base cases in Section 4.1.1 and
satisfy the sufficient conditions from Section 4.1.2, thus completing the proof.
4.1.1 Base cases
This subsection exhibits two base cases of Lemma 3. The first case corresponds to u = 0, for which
we have φ(V) = φ. Since adding centers cannot increase the potential, i.e. φ' ≤ φ deterministically,
Lemma 3 holds with

    c_V(t, 0) = 1,   c_U(t, 0) = 0,   t ≥ 0.        (3)
The second base case occurs for t = u, u ≥ 1. For this purpose, a slightly strengthened version of [4,
Lemma 3.3] is used, as given next.

Lemma 4. With the same definitions as in Lemma 3 except with t ≤ u, we have

    E[φ' | φ] ≤ (1 + H_t) φ(V) + (1 + H_{t−1}) ψ(U) + ((u − t)/u) φ(U),

where we define H_0 = 0 and H_{−1} = −1 for convenience.

The improvement is in the coefficient in front of ψ(U), from (1 + H_t) to (1 + H_{t−1}). The proof
follows that of [4, Lemma 3.3] with some differences and is deferred to the supplementary material.
Specializing to the case t = u, Lemma 4 coincides with Lemma 3 with coefficients

    c_V(u, u) = 1 + H_u,   c_U(u, u) = 1 + H_{u−1}.        (4)

4.1.2 Sufficient conditions on coefficients
We now assume inductively that Lemma 3 holds for (t, u) and (t, u + 1). The induction to the case
(t + 1, u + 1) is then completed under the following sufficient conditions on the coefficients:

    c_V(t, u + 1) ≥ 1,                                                              (5a)
    (c_V(t, u + 1) − c_U(t, u + 1)) c_V(t, u)² ≥ (c_U(t, u + 1) − c_V(t, u))²,       (5b)

and

    c_V(t + 1, u + 1) ≥ (1/2) [ c_V(t, u) + ( c_V(t, u)² + 4 max{c_V(t, u + 1) − c_V(t, u), 0} )^{1/2} ],   (6a)
    c_U(t + 1, u + 1) ≥ c_V(t, u).                                                   (6b)
The first pair of conditions (5) applies to the coefficients involved in the inductive hypothesis for (t, u)
and (t, u + 1). The second pair (6) can be seen as a recursive specification of the new coefficients
for (t + 1, u + 1). This inductive step together with base cases (3) and (4) are sufficient to extend
Lemma 3 to all t > u, starting with (t, u) = (1, 0) and (t, u + 1) = (1, 1).

The inductive step is broken down into a series of three lemmas, each building upon the last. The first
lemma applies the inductive hypothesis to derive a bound on the potential that depends not only on
φ(V) and ψ(U) but also on φ(U).
Lemma 5. Assume that Lemma 3 holds for (t, u) and (t, u + 1). Then for the case (t + 1, u + 1),
i.e. φ corresponding to u + 1 uncovered clusters and φ' resulting after adding t + 1 centers,

    E[φ' | φ] ≤ min{ [c_V(t, u) φ(U) + c_V(t, u + 1) φ(V)] / [φ(U) + φ(V)] · φ(V)
                     + [c_V(t, u) φ(U) + c_U(t, u + 1) φ(V)] / [φ(U) + φ(V)] · ψ(U),
                     φ(U) + φ(V) }.
Proof. We consider the two cases in which the first of the t + 1 new centers is chosen from either the
covered set V or the uncovered set U. Denote by φ₁ the potential after adding the first new center.

Covered case: This case occurs with probability φ(V)/φ and leaves the covered and uncovered sets
unchanged. We then invoke Lemma 3 with (t, u + 1) (one fewer center to add) and φ₁ playing the
role of φ. The contribution to E[φ' | φ] from this case is then bounded by

    (φ(V)/φ) (c_V(t, u + 1) φ₁(V) + c_U(t, u + 1) ψ(U)) ≤ (φ(V)/φ) (c_V(t, u + 1) φ(V) + c_U(t, u + 1) ψ(U)),   (7)

noting that φ₁(S) ≤ φ(S) for any set S.

Uncovered case: We consider each uncovered cluster A ⊆ U separately. With probability φ(A)/φ,
the first new center is selected from A, moving A from the uncovered to the covered set and reducing
the number of uncovered clusters by one. Applying Lemma 3 for (t, u), the contribution to E[φ' | φ]
is bounded by

    (φ(A)/φ) [ c_V(t, u) (φ₁(V) + φ₁(A)) + c_U(t, u) (ψ(U) − ψ(A)) ].

Taking the expectation with respect to possible centers in A and using Lemma 2 and φ₁(V) ≤ φ(V),
we obtain the further bound

    (φ(A)/φ) [ c_V(t, u) (φ(V) + ψ(A)) + c_U(t, u) (ψ(U) − ψ(A)) ].

Summing over A ⊆ U yields

    (φ(U)/φ) (c_V(t, u) φ(V) + c_U(t, u) ψ(U)) + [(c_V(t, u) − c_U(t, u))/φ] Σ_{A⊆U} φ(A) ψ(A)
        ≤ (φ(U)/φ) c_V(t, u) (φ(V) + ψ(U)),                                                        (8)

using the inner product bound Σ_{A⊆U} φ(A) ψ(A) ≤ φ(U) ψ(U).

The result follows from summing (7) and (8) and combining with the trivial bound E[φ' | φ] ≤ φ =
φ(U) + φ(V).
The bound in Lemma 5 depends on φ(U), the potential over uncovered clusters, which can be
arbitrarily large or small. In the next lemma, φ(U) is eliminated by maximizing with respect to it.

Lemma 6. Assume that Lemma 3 holds for (t, u) and (t, u + 1) with c_V(t, u + 1) ≥ 1. Then for the
case (t + 1, u + 1) in the sense of Lemma 5,

    E[φ' | φ] ≤ (1/2) c_V(t, u) (φ(V) + ψ(U)) + (1/2) max{ c_V(t, u) (φ(V) + ψ(U)), √Q },

where

    Q = ( c_V(t, u)² − 4 c_V(t, u) + 4 c_V(t, u + 1) ) φ(V)²
        + 2 ( c_V(t, u)² − 2 c_V(t, u) + 2 c_U(t, u + 1) ) φ(V) ψ(U) + c_V(t, u)² ψ(U)².
Proof. Let B₁(φ(U)) and B₂(φ(U)) denote the two terms inside the minimum in Lemma 5 (i.e.
B₂(φ(U)) = φ(U) + φ(V)). The derivative of B₁(φ(U)) with respect to φ(U) is given by

    B₁'(φ(U)) = [φ(V) / (φ(U) + φ(V))²] [ (c_V(t, u) − c_V(t, u + 1)) φ(V) + (c_V(t, u) − c_U(t, u + 1)) ψ(U) ],

which does not change sign as a function of φ(U). The two cases B₁'(φ(U)) ≥ 0 and B₁'(φ(U)) < 0
are considered separately below. Taking the maximum of the resulting bounds (9), (10) establishes
the lemma.

Case B₁'(φ(U)) ≥ 0: Both B₁(φ(U)) and B₂(φ(U)) are non-decreasing functions of φ(U). The
former has the finite supremum

    c_V(t, u) (φ(V) + ψ(U)),        (9)

whereas the latter increases without bound. Therefore B₁(φ(U)) eventually becomes the smaller of
the two and (9) can be taken as an upper bound on min{B₁(φ(U)), B₂(φ(U))}.

Case B₁'(φ(U)) < 0: At φ(U) = 0, we have B₁(0) = c_V(t, u + 1) φ(V) + c_U(t, u + 1) ψ(U) and
B₂(0) = φ(V). The assumption c_V(t, u + 1) ≥ 1 implies that B₁(0) ≥ B₂(0). Since B₁(φ(U))
is now a decreasing function, the two functions must intersect and the point of intersection then
provides an upper bound on min{B₁(φ(U)), B₂(φ(U))}. The supplementary material provides some
algebraic details on solving for the intersection. The resulting bound is

    (1/2) c_V(t, u) (φ(V) + ψ(U)) + (1/2) √Q.        (10)
The bound in Lemma 6 is a nonlinear function of φ(V) and ψ(U), in contrast to the desired form in
Lemma 3. The next step is to linearize the bound by imposing additional conditions (5).

Lemma 7. Assume that Lemma 3 holds for (t, u) and (t, u + 1) with coefficients satisfying (5). Then
for the case (t + 1, u + 1) in the sense of Lemma 5,

    E[φ' | φ] ≤ (1/2) [ c_V(t, u) + ( c_V(t, u)² + 4 max{c_V(t, u + 1) − c_V(t, u), 0} )^{1/2} ] φ(V) + c_V(t, u) ψ(U).

Proof. It suffices to linearize the √Q term in Lemma 6, specifically by showing that Q ≤ (a φ(V) +
b ψ(U))² for all φ(V), ψ(U) with a = ( c_V(t, u)² + 4(c_V(t, u + 1) − c_V(t, u)) )^{1/2} and b = c_V(t, u).
Proof of this inequality is provided in the supplementary material. Incorporating the inequality into
Lemma 6 proves the result.

Given conditions (5) and Lemma 7, the inductive step for Lemma 3 can be completed by defining
c_V(t + 1, u + 1) and c_U(t + 1, u + 1) recursively as in (6).
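As a sanity check on this construction, the closed form (2a) with the optimal parameters can be
tested against recursion (6a) numerically. The snippet below is our own illustration and not part of
the paper's proof:

    import numpy as np

    gamma = (1 + np.sqrt(5)) / 2
    a, b = gamma - 1, gamma           # the optimal choice a + 1 = b = gamma

    def c_V(t, u):                    # closed form (2a)
        return 1 + (a + 1) * u / (t - u + b)

    # Check recursion (6a) on a grid of (t, u) with t > u.
    holds = all(
        c_V(t + 1, u + 1) >= 0.5 * (c_V(t, u) + np.sqrt(
            c_V(t, u) ** 2 + 4 * max(c_V(t, u + 1) - c_V(t, u), 0.0))) - 1e-12
        for u in range(0, 40) for t in range(u + 1, 80)
    )
    print(holds)   # True, consistent with Lemma 9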
4.1.3 Proof with specific form for coefficients
We now prove that Lemma 3 holds for coefficients c_V(t, u), c_U(t, u) given by (2) with a + 1 ≥ b > 0
and ab ≥ 1. Given the inductive approach and the results established in Sections 4.1.1 and 4.1.2,
the proof requires the remaining steps below. First, it is shown that the base cases (3), (4) from
Section 4.1.1 imply that Lemma 3 is true for the same base cases but with c_V(t, u), c_U(t, u) given
by (2) instead. Second, (2) is shown to satisfy conditions (5) for all t > u, thus permitting Lemma
7 to be used. Third, (2) is also shown to satisfy (6), which combined with Lemma 7 completes the
induction.

Considering the base cases, for u = 0, (3) and (2) coincide so there is nothing to prove. For the case
t = u, u ≥ 1, Lemma 3 with coefficients given by (4) implies the same with coefficients given by (2)
provided that

    (1 + H_u) φ(V) + (1 + H_{u−1}) ψ(U) ≤ ( 1 + (a + 1)u/b ) φ(V) + ( 1 + (a + 1)(u − 1)/b ) ψ(U)

for all φ(V), ψ(U). This in turn is ensured if the coefficients satisfy H_u ≤ (a + 1)u/b for all u ≥ 1.
The most stringent case is u = 1 and corresponds to the assumption a + 1 ≥ b.

For the second step of establishing (5), it is clear that (5a) is satisfied by (2a). A direct calculation
presented in the supplementary material shows that (5b) is also true.
Lemma 8. Condition (5b) is satisfied for all t > u if c_V(t, u), c_U(t, u) are given by (2) and ab ≥ 1.

Similarly for the third step, it suffices to show that (2a) satisfies recursion (6a) since (2b) automatically
satisfies (6b). A proof is provided in the supplementary material.

Lemma 9. Recursion (6a) is satisfied for all t > u if c_V(t, u) is given by (2a) and ab ≥ 1.

Lastly, we minimize c_V(t, u) in (2a) with respect to a, b, subject to a + 1 ≥ b > 0 and ab ≥ 1. For
fixed a, minimizing with respect to b yields b = a + 1 and c_V(t, u) = 1 + ((a + 1)u)/(t − u + a + 1).
Minimizing with respect to a then results in setting ab = a(a + 1) = 1. The solution satisfying
a + 1 > 0 is a = γ − 1 and b = γ.
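A quick check of this arithmetic (ours, not the paper's): with a = γ − 1 and b = γ, the constraint
ab ≥ 1 holds with equality, since (γ − 1)γ = γ² − γ = 1 by the defining identity γ² = γ + 1 of the
golden ratio, while a + 1 = γ = b satisfies a + 1 ≥ b > 0.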
4.2 Proof of Theorem 1
Denote by n_A the number of points in optimal cluster A. In the first iteration of Algorithm 1, the first
cluster center is selected from some A with probability n_A/n. Conditioned on this event, Lemma 3
is applied with covered set V = A, u = k − 1 uncovered clusters, and t = βk − 1 remaining cluster
centers. This bounds the final potential φ' as

    E[φ' | φ] ≤ c_V(βk − 1, k − 1) φ(A) + c_U(βk − 1, k − 1) (ψ − ψ(A)),

where c_V(t, u), c_U(t, u) are given by (2) with a + 1 = b = γ. Taking the expectation over possible
centers in A and using Lemma 1,

    E[φ' | A] ≤ r_u^(ℓ) c_V(βk − 1, k − 1) φ*(A) + c_U(βk − 1, k − 1) (ψ − ψ(A)).

Taking the expectation over clusters A and recalling that ψ = r_D^(ℓ) φ*,

    E[φ'] ≤ r_D^(ℓ) c_U(βk − 1, k − 1) φ* − C Σ_A (n_A/n) φ*(A),        (11)

where C = r_D^(ℓ) c_U(βk − 1, k − 1) − r_u^(ℓ) c_V(βk − 1, k − 1). Using (2) and r_D^(ℓ) = 2^ℓ r_u^(ℓ) from Lemma 2,

    C = r_u^(ℓ) [ 2^ℓ ((β − 1)k + γ(k − 1)) − (β − 1 + γ)k ] / ((β − 1)k + γ)
      = r_u^(ℓ) [ (2^ℓ − 1)(β − 1)k + γ((2^ℓ − 1)(k − 1) − 1) ] / ((β − 1)k + γ).

The last expression for C is seen to be non-negative for β ≥ 1, k ≥ 2, and ℓ ≥ 1. Furthermore, since
n_A = 1 (a singleton cluster) implies that φ*(A) = 0, we have

    Σ_A n_A φ*(A) = Σ_{A: n_A ≥ 2} n_A φ*(A) ≥ 2φ*.        (12)

Substituting (2) and (12) into (11), we obtain

    E[φ']/φ* ≤ r_D^(ℓ) ( 1 + γ(k − 2)/((β − 1)k + γ) ) − 2C/n.        (13)

The last step is to recall [4, Theorems 3.1 and 5.1], which together state that

    E[φ']/φ* ≤ r_D^(ℓ) (1 + H_{k−1})        (14)

for φ' resulting from selecting exactly k cluster centers. In fact, (14) also holds for βk centers, β ≥ 1,
since adding centers cannot increase the potential. The proof is completed by taking the minimum of
(13) and (14).
References
[1] A. Aggarwal, A. Deshpande, and R. Kannan. Adaptive sampling for k-means clustering. In Proc. 12th Int.
Workshop and 13th Int. Workshop on Approximation, Randomization, and Combinatorial Optimization.
Algorithms and Techniques, pages 15–28, August 2009.
[2] N. Ailon, R. Jaiswal, and C. Monteleoni. Streaming k-means approximation. In Adv. Neural Information
Processing Systems 22, pages 10–18, December 2009.
[3] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. NP-hardness of Euclidean sum-of-squares clustering.
Mach. Learn., 75(2):245–248, May 2009.
[4] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proc. 18th ACM-SIAM
Symp. Discrete Algorithms, pages 1027–1035, January 2007.
[5] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit. Local search heuristics for
k-median and facility location problems. SIAM J. Comput., 33(3):544–562, March 2004.
[6] P. Awasthi, M. Charikar, R. Krishnaswamy, and A. K. Sinop. The hardness of approximation of Euclidean
k-means. In Proc. 31st Int. Symp. Computational Geometry, pages 754–767, June 2015.
[7] S. Bandyapadhyay and K. Varadarajan. On variants of k-means clustering. Technical Report
arXiv:1512.02985, December 2015.
[8] M. Charikar, S. Guha, E. Tardos, and D. B. Shmoys. A constant-factor approximation algorithm for the
k-median problem. J. Comput. Syst. Sci., 65(1):129–149, August 2002.
[9] K. Chen. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their
applications. SIAM J. Comput., 39(3):923–947, September 2009.
[10] V. Cohen-Addad, P. N. Klein, and C. Mathieu. Local search yields approximation schemes for k-means
and k-median in Euclidean and minor-free metrics. Technical Report arXiv:1603.09535, March 2016.
[11] S. Dasgupta. The hardness of k-means clustering. Technical Report CS2008-0916, Department of
Computer Science and Engineering, University of California, San Diego, 2008.
[12] D. Feldman, M. Monemizadeh, and C. Sohler. A PTAS for k-means clustering based on weak coresets. In
Proc. 23rd Int. Symp. Computational Geometry, pages 11–18, June 2007.
[13] Z. Friggstad, M. Rezapour, and M. R. Salavatipour. Local search yields a PTAS for k-means in doubling
metrics. Technical Report arXiv:1603.08976, March 2016.
[14] M. Inaba, N. Katoh, and H. Imai. Applications of weighted Voronoi diagrams and randomization to
variance-based k-clustering. In Proc. 10th Int. Symp. Computational Geometry, pages 332–339, 1994.
[15] A. K. Jain. Data clustering: 50 years beyond k-means. Pattern Recogn. Lett., 31(8):651–666, June 2010.
[16] K. Jain and V. V. Vazirani. Approximation algorithms for metric facility location and k-median problems
using the primal-dual schema and Lagrangian relaxation. J. ACM, 48(2):274–296, March 2001.
[17] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. A local search
approximation algorithm for k-means clustering. Comput. Geom., 28(2–3):89–112, June 2004.
[18] A. Kumar, Y. Sabharwal, and S. Sen. Linear-time approximation schemes for clustering problems in any
dimensions. J. ACM, 57(2):5:1–5:32, January 2010.
[19] S. Lloyd. Least squares quantization in PCM. Technical report, Bell Laboratories, 1957.
[20] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. In Proc. 3rd
Int. Workshop Algorithms and Computation, pages 274–285, February 2009.
[21] K. Makarychev, Y. Makarychev, M. Sviridenko, and J. Ward. A bi-criteria approximation algorithm for
k-means. Technical Report arXiv:1507.04227, August 2015.
[22] J. Matoušek. On approximate geometric k-clustering. Discrete & Comput. Geom., 24(1):61–84, January
2000.
[23] R. R. Mettu and C. G. Plaxton. Optimal time bounds for approximate clustering. Mach. Learn.,
56(1–3):35–60, June 2004.
[24] R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the
k-means problem. J. ACM, 59(6):28, December 2012.
5,869 | 631 | Input Reconstruction Reliability Estimation
Dean A. Pomerleau
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
This paper describes a technique called Input Reconstruction Reliability Estimation
(IRRE) for determining the response reliability of a restricted class of multi-layer
perceptrons (MLPs). The technique uses a network's ability to accurately encode
the input pattern in its internal representation as a measure of its reliability. The
more accurately a network is able to reconstruct the input pattern from its internal
representation, the more reliable the network is considered to be. IRRE provides
a good estimate of the reliability of MLPs trained for autonomous driving. Results
are presented in which the reliability estimates provided by IRRE are used to select
between networks trained for different driving situations.
1 Introduction
In many real world domains it is important to know the reliability of a network's response
since a single network cannot be expected to accurately handle all the possible inputs. Ideally,
a network should not only provide a response to a given input pattern, but also an indication of
the likelihood that its response is "correct". This reliability measure could be used to weight
the outputs from multiple networks and to determine when a new network needs to be trained.
This paper describes a technique for estimating a network's reliability called Input Reconstruction Reliability Estimation (IRRE). IRRE relies on the fact that the hidden representation
developed by an artificial neural network can be considered to be a compressed representation
of important input features. For example, when the network shown in Figure 1 is trained to
produce the correct steering direction from images of the road ahead, the hidden units learn to
encode the position and orientation of important features like the road edges and lane markers
(See [pomerleau, 1991] for more details). Because there are many fewer hidden units than
input units in the network, the hidden units cannot accurately represent all the details of an
[Figure 1: Original driving network architecture, with a 30×32 unit sensor input retina and 30 steering output units.]
arbitrary input pattern. Instead, the hidden units learn to devote their limited representational
capabilities to encoding the position and orientation of consistent, frequently-occurring features from the training set. When presented with an atypical input, such as a road with a
different number of lanes, the feature detectors developed by the hidden units will not be
capable of accurately encoding all the actual input features.
Input Reconstruction Reliability Estimation exploits this limitation in representational capacity
to estimate a network's reliability. In IRRE, the network's internal representation is used to
reconstruct the input pattern being presented. The more closely the reconstructed input
matches the actual input, the more familiar the input and hence the more reliable the network's
response.
2 Reconstructing the Input
IRRE utilizes an additional set of output units to perform input reconstruction, called the
encoder output array, as depicted in Figure 2. This second set of output units has the same
dimensionality as the input retina. In the experiments described in this paper, the input layer
and encoder output array have 30 rows and 32 columns. The desired activation for each of
these additional output units is identical to the activation of the corresponding input unit. In
essence, these additional output units turn the network into an autoencoder.
The network is trained using backpropagation both to produce the correct steering response
on the steering output units, and to reconstruct the input image as accurately as possible
on the encoder output array. During the training process, the network is presented with
several hundred images taken with a camera onboard our test vehicle as a person drives
(See [pomerleau, 1991] for more details). Training typically requires approximately 3 minutes
during which the person drives over a 1/4 to 1/2 mile stretch of road.
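In modern terms, this joint training amounts to minimizing the sum of a steering loss and a
reconstruction loss. The sketch below is our own paraphrase with squared error assumed; the original
system simply backpropagated the errors from both output groups:

    import numpy as np

    def irre_training_loss(steer_out, steer_target, recon_out, input_image):
        """Joint objective: error on the 30 steering units plus error between
        the 30x32 encoder output array and the 30x32 input retina."""
        steering_loss = np.mean((steer_out - steer_target) ** 2)
        reconstruction_loss = np.mean((recon_out - input_image) ** 2)
        return steering_loss + reconstruction_loss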
[Figure 2: Network architecture augmented to include an encoder output array.]
During testing on a new stretch of road, images are presented to the network and activation is
propagated forward through the network to produce a steering response and a reconstructed
input image. The reliability of the steering response is estimated by computing the correlation
coefficient ρ(I, R) between the activation levels of units in the actual input image I and the
reconstructed input image R using the following formula:

    ρ(I, R) = ( ⟨IR⟩ − ⟨I⟩⟨R⟩ ) / (σ_I σ_R)

where ⟨I⟩ and ⟨R⟩ are the mean activation values of the actual and the reconstructed images, ⟨IR⟩ is
the mean of the set formed by the unit-wise product of the two images, and σ_I and σ_R represent
the standard deviations of the activation values of each image. The higher the correlation
between the two images, the more reliable the network's response is estimated to be. The
reason correlation is used to measure the degrees of match between the two images is that,
unlike Euclidean distance, the correlation measure is invariant to differences in the mean
and variance between the two images. This is important since the mean and variance of the
input and the reconstructed images can sometimes vary, even when the input image depicts a
familiar situation.
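A direct implementation of this reliability measure is straightforward; the sketch below is our own
code following the formula above:

    import numpy as np

    def irre_reliability(actual, reconstructed):
        """Correlation coefficient rho(I, R) between the actual input image I
        and the reconstruction R. Higher values mean a more familiar input and
        hence a more reliable response; the measure is invariant to the mean
        and variance of each image."""
        I, R = actual.ravel(), reconstructed.ravel()
        return ((I * R).mean() - I.mean() * R.mean()) / (I.std() * R.std())

    def reconstruction_error(actual, reconstructed):
        """The 'reconstruction error' used below: 1 - rho(I, R)."""
        return 1.0 - irre_reliability(actual, reconstructed)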
3 Results and Applications
The degree of correlation between the actual and the reconstructed input images is an extremely
good indicator of network response accuracy in the domain of autonomous driving, as shown
in Figure 3. It shows a trained network's steering error and reconstruction error as the vehicle
drives down a quarter mile stretch of road that starts out as a single lane path and eventually
becomes a two-lane street. The solid line indicates the network's steering error, as measured
by the difference in turn curvature between the network's steering response and a person's
steering response at that point along the road. The dashed line represents the network's
"reconstruction error", which is defined to be the degree of statistical independence between the
actual and reconstructed images, or 1 − ρ(I, R).

[Figure 3: Reconstruction error obtained using autoencoder reconstruction versus network steering
error over a quarter-mile stretch of one-lane and then two-lane road. The steering error (solid line)
and the input reconstruction error (dashed line) track each other closely, with a correlation coefficient
of 0.92; an intersection partway along the route is marked on the plot.]
The two curves are nearly identical, having a correlation coefficient of 0.92. This close match
between the curves demonstrates that when the network is unable to accurately reconstruct the
input image, it is also probably suggesting an incorrect steering direction. Visual inspection
of the actual and reconstructed input images demonstrates that the degree of resemblance
between them is a good indication of the actual input's familiarity, as shown in Figure 4. It
depicts the input image, network response, and reconstructed input at the three points along the
road, labeled A, B and C in Figure 3. When presented with the image at point A, which closely
resembles patterns from the training set, the network's reconstructed image closely resembles the
actual input, as shown by the close correspondence between the images labeled "Input Acts"
and "Reconstructed Input" in the left column of Figure 4. This close correspondence between
the input and reconstructed images suggests that the network can reliably steer in this situation.
In fact it can steer accurately on this image, as demonstrated by the close match between the
network's steering response labeled "Output Acts" and the desired steering response labeled
"Target Acts" in the upper left corner of Figure 4.
When presented with a situation the network did not encounter during training, such as the
fork image and the two-lane road image shown in the other two columns of Figure 4, the
reconstructed image bears much less resemblance to the original input. This suggests that the
network is confused. This confusion results in an incorrect steering response, illustrated in
the discrepancy between the network's steering response and the target steering response for
the two atypical images.
[Figure 4: The actual input, the reconstructed input, and the point-wise absolute difference between
them on a road image similar to those in the training set (labeled A), and on two atypical images
(labeled B and C).]

[Figure 5: Reconstruction error of networks trained for one-lane road driving (solid line) and
two-lane road driving (dashed line), over a sequence of one-lane and then two-lane road images.]

The reliability prediction provided by IRRE has been used to improve the performance of
the neural network based autonomous driving system in a number of ways. The simplest is
to use IRRE to control vehicle speed. The more accurate the input reconstruction, the more
confident the network, and hence the faster the system drives the vehicle. A second use the
system makes of the reliability estimate provided by IRRE is to update the vehicle's position
on a rough map of the area being driven over. When the map indicates there should be an
intersection or other confusing situation up ahead, a subsequent sharp rise in reconstruction
error is a good indication that the vehicle has actually reached that location, allowing the
system to pinpoint the vehicle's position. Knowing the vehicle's location on a map is useful
for integrating symbolic processing, such as planning, into the neural network driving system
(for more details, see [pomerleau et al., 1991]).
Figure 5 illustrates how IRRE can be used to integrate the outputs from multiple expert
networks. The two lines in this graph illustrate the reconstruction error of two networks, one
trained to steer on one-lane roads (solid line), and the other trained to steer on two-lane roads
(dashed line). The reconstruction error for the one-lane network is low on the one-lane road
images, and high on the two-lane road images. The opposite is true for the network trained
for two-lane road driving. The reliability estimate provided by IRRE allows the system to
determine which network is most reliable for driving in the current situation. By simulating
multiple networks in parallel, and then selecting the one with the highest reliability, the system
has been able to drive the Navlab test vehicle on one, two and four lane roads automatically
at speeds of up to 30 miles per hour.
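The arbitration scheme just described can be sketched in a few lines. The interface below, in which
each expert maps an image to a (steering response, reconstructed image) pair, is our own illustration
and not code from the original system:

    import numpy as np

    def select_expert(input_image, experts):
        """Run all expert networks in parallel and return the steering response
        of the one whose input reconstruction correlates best with the input."""
        def rho(I, R):
            I, R = I.ravel(), R.ravel()
            return ((I * R).mean() - I.mean() * R.mean()) / (I.std() * R.std())
        best_rho, best_steering = -np.inf, None
        for net in experts:
            steering, reconstruction = net(input_image)
            r = rho(input_image, reconstruction)
            if r > best_rho:
                best_rho, best_steering = r, steering
        return best_steering, best_rho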
4 Discussion
The effectiveness of input reconstruction reliability estimation stems from the fact that the network has a small number of hidden units and is only trained in a narrow range of situations.
These constraints prevent the network from faithfully encoding arbitrary input patterns. Instead, the hidden units learn to encode features in the training images that are most important
for the task. Baldi and Hornik [Baldi & Hornik, 1989] have shown that if an autoencoder
network with a single layer of N linear hidden units is trained with back-propagation, the activation levels of the hidden units will represent the first N principal components of the training
set. Since the units in the driving network are non-linear, this assertion does not strictly hold in
this case. However, Cottrell and Munro [Cottrell & Munro, 1988] have found empirically that
autoencoder networks with a sigmoidal activation function develop hidden units that span the
principal subspace of the training images, with some noise on the first principal component
due to network non-linearity. Because the principal components represent the dimensions
along which the training examples vary most, it can be shown that using linear combinations
of the principal components to represent the individual training patterns optimally preserves
the information contained in the training set [Linsker, 1989].
However the compressed representation developed by a linear autoencoder network is only
optimal for encoding images from the same distribution as the training set. When presented
with images very different from those in the training set, the image reconstructed from the
internal representation is not as accurate. The results presented in this paper demonstrate that
this reconstruction error can be employed to estimate the likelihood and magnitude of error in
MLPs trained for autonomous driving.
However the input reconstruction technique presented here has a serious potential shortcoming, namely that it forces the network's hidden units to encode all input features, including
potentially irrelevant ones. While this increased representation load on the hidden units has
the potential to degrade network performance, this effect has not been observed in the tests
conducted so far. In support of this finding, Gluck [Gluck, personal communications] has
found that forcing a network to autoencode its input frequently improves its generalization.
In [Gluck & Myers, 1992], Gluck and Myers use the representation developed by an autoencoder network as a model for simple types of learning in biological systems. The model
suggests that the hippocampus acts as an autoencoder, developing internal representations that
are then used to perform other tasks.
But if interference from the autoencoder task proves to be a problem, one way to eliminate
it would be to have separate groups of hidden units connected exclusively to one group of
outputs or the other. Having a separate set of hidden units for the autoencoder task would
ensure that the representation developed for the input reconstruction does not interfere with
representation developed for the "normal" task. It remains to be seen if this decoupling of
internal representations will adversely affect IRRE's ability to predict network errors.
As a technique for multi-network integration, IRRE has several advantages over existing connectionist arbitration methods, such as Hampshire and Waibel's Meta-Pi architecture [Hampshire & Waibel, 1992] and the Adaptive Mixture of Experts Model of Jacobs et
al. [Jacobs et al., 1991]. It is a more modular approach, since each expert can be trained
entirely in isolation and then later combined with other experts without any additional training
by simply selecting the most reliable network for the current input. Since IRRE provides an
absolute measure of a single network's reliability, and not just a measure of how appropriate
a network is relative to others, IRRE can also be used to determine when none of the experts
is capable of coping with the current situation.
A potentially interesting extension to IRRE is the development of techniques for reasoning
about the difference between the actual input and the reconstructed input. For instance, it
should be possible to recognize when the vehicle has reached a fork in the road by the characteristic mistakes the network makes in reconstructing the input image. Another important
component of future work is to test the ability of IRRE to estimate network reliability in
domains other than autonomous driving.
Acknowledgements
I thank Dave Touretzky, Chuck Thorpe and the entire Unmanned Ground Vehicle Group at
CMU for their support and suggestions. Principal support for this research has come from
DARPA, under contracts "Perception for Outdoor Navigation" (contract number DACA76-89-C-0014, monitored by the US Army Topographic Engineering Center) and "Unmanned
Ground Vehicle System" (contract number DAAE07-90-C-R059, monitored by TACOM).
References
[Baldi & Hornik, 1989] Baldi, P. and Hornik, K. (1989) Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, Vol. 2, pp. 53-58.
[Cottrell & Munro, 1988] Cottrell, G.W. and Munro, P. (1988) Principal components analysis of images via back-propagation. Proc. Soc. of Photo-Optical Instr. Eng., Cambridge MA.
[Gluck, personal communications] Gluck, M.A. (1992) Personal Communications. Rutgers Univ., Newark NJ.
[Gluck & Myers, 1992] Gluck, M.A. and Myers, C.E. (1992) Hippocampal function in representation and generalization: a computational theory. Proc. 1992 Cogn. Sci. Soc. Conf. Hillsdale, NJ: Erlbaum Assoc.
[Hampshire & Waibel, 1992] Hampshire, J.B. and Waibel, A.H. (1992) The Meta-Pi network: building distributed knowledge representations for robust pattern recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence.
[Jacobs et al., 1991] Jacobs, R.A., Jordan, M.I., Nowlan, S.J. and Hinton, G.E. (1991) Adaptive mixtures of local experts. Neural Computation, 3:1, Terrence Sejnowski (ed).
[Linsker, 1989] Linsker, R. (1989) Designing a sensory processing system: What can be learned from principal component analysis? IBM Technical Report RC 14983 (#66896).
[pomerleau, 1991] Pomerleau, D.A. (1991) Efficient Training of Artificial Neural Networks for Autonomous Navigation. Neural Computation 3:1, Terrence Sejnowski (ed).
[pomerleau et al., 1991] Pomerleau, D.A., Gowdy, J., Thorpe, C.E. (1991) Combining artificial neural networks and symbolic processing for autonomous robot guidance. Engineering Applications of Artificial Intelligence, 4:4, pp. 279-285.
subspace:1 distance:1 unable:1 separate:2 thank:1 capacity:1 street:1 sci:1 degrade:1 reason:1 gel:1 potentially:2 rise:1 pomerleau:12 reliably:1 perform:2 allowing:1 upper:1 situation:8 hinton:1 t__:1 communication:3 arbitrary:2 sharp:1 namely:1 unmanned:2 learned:1 narrow:1 hour:1 trans:1 able:2 pattern:10 perception:1 reliable:6 oj:1 including:1 force:1 indicator:1 improve:1 autoencoder:9 acknowledgement:1 autoencode:1 determining:1 relative:1 bear:1 interesting:1 limitation:1 suggestion:1 versus:1 ita:1 integrate:1 degree:4 consistent:1 principle:1 pi:2 ibm:1 row:1 absolute:2 distributed:1 curve:2 dimension:1 world:1 sensory:1 forward:1 adaptive:2 far:1 reconstructed:16 sj:1 pittsburgh:1 learn:3 robust:1 decoupling:1 hornik:4 domain:3 did:1 noise:1 augmented:1 en:1 depicts:2 ienl:1 position:4 pinpoint:1 outdoor:1 atypical:3 down:1 minute:1 formula:1 familiarity:1 load:1 magnitude:1 illustrates:1 occurring:1 gluck:8 depicted:1 intersection:2 simply:1 army:1 visual:1 contained:1 relies:1 ma:1 principal:8 hampshire:4 called:3 perceptrons:1 select:1 internal:6 support:3 newark:1 arbitration:1 |
5,870 | 6,310 | Phased LSTM: Accelerating Recurrent Network
Training for Long or Event-based Sequences
Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu
Institute of Neuroinformatics
University of Zurich and ETH Zurich
Zurich, Switzerland 8057
{dneil, pfeiffer, shih}@ini.uzh.ch
Abstract
Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for
extracting patterns from temporal sequences. However, current RNN models are
ill-suited to process irregularly sampled data triggered by events generated in
continuous time by sensors or other neurons. Such data can occur, for example,
when the input comes from novel event-driven artificial sensors that generate
sparse, asynchronous streams of events or from multiple conventional sensors with
different update intervals. In this work, we introduce the Phased LSTM model,
which extends the LSTM unit by adding a new time gate. This gate is controlled
by a parametrized oscillation with a frequency range that produces updates of the
memory cell only during a small percentage of the cycle. Even with the sparse
updates imposed by the oscillation, the Phased LSTM network achieves faster
convergence than regular LSTMs on tasks which require learning of long sequences.
The model naturally integrates inputs from sensors of arbitrary sampling rates,
thereby opening new areas of investigation for processing asynchronous sensory
events that carry timing information. It also greatly improves the performance of
LSTMs in standard RNN applications, and does so with an order-of-magnitude
fewer computes at runtime.
1 Introduction
Interest in recurrent neural networks (RNNs) has greatly increased in recent years, since larger
training databases, more powerful computing resources, and better training algorithms have enabled
breakthroughs in both processing and modeling of temporal sequences. Applications include speech
recognition [13], natural language processing [1, 20], and attention-based models for structured
prediction [5, 29]. RNNs are attractive because they equip neural networks with memories, and
the introduction of gating units such as LSTM and GRU [16, 6] has greatly helped in making the
learning of these networks manageable. RNNs are typically modeled as discrete-time dynamical
systems, thereby implicitly assuming a constant sampling rate of input signals, which also becomes
the update frequency of recurrent and feed-forward units. Although early work such as [25, 10, 4]
has realized the resulting limitations and suggested continuous-time dynamical systems approaches
towards RNNs, the great majority of modern RNN implementations uses fixed time steps.
Although fixed time steps are perfectly suitable for many RNN applications, there are several
important scenarios in which constant update rates impose constraints that affect the precision and
efficiency of RNNs. Many real-world tasks for autonomous vehicles or robots need to integrate input
from a variety of sensors, e.g. for vision, audition, distance measurements, or gyroscopes. Each sensor
may have its own data sampling rate, and short time steps are necessary to deal with sensors with
high sampling frequencies. However, this leads to an unnecessarily higher computational load and
power consumption so that all units in the network can be updated with one time step.

[Figure 1: Model architecture. (a) Standard LSTM model. (b) Phased LSTM model, with time gate k_t
controlled by timestamp t. In the Phased LSTM formulation, the cell value c_t and the hidden output
h_t can only be updated during an "open" phase; otherwise, the previous values are maintained.]

An interesting
new application area is processing of event-based sensors, which are data-driven, and record stimulus
changes in the world with short latencies and accurate timing. Processing the asynchronous outputs of
such sensors with time-stepped models would require high update frequencies, thereby counteracting
the potential power savings of event-based sensors. And finally there is an interest coming from
computational neuroscience, since brains can be viewed loosely as very large RNNs. However,
biological neurons communicate with spikes, and therefore perform asynchronous, event-triggered
updates in continuous time. This work presents a novel RNN model which can process inputs sampled
at asynchronous times and is described further in the following sections.
2 Model Description
Long short-term memory (LSTM) units [16] (Fig. 1(a)) are an important ingredient for modern deep
RNN architectures. We first define their update equations in the commonly-used version from [12]:
    i_t = σ_i(x_t W_xi + h_{t−1} W_hi + w_ci ⊙ c_{t−1} + b_i)         (1)
    f_t = σ_f(x_t W_xf + h_{t−1} W_hf + w_cf ⊙ c_{t−1} + b_f)         (2)
    c_t = f_t ⊙ c_{t−1} + i_t ⊙ σ_c(x_t W_xc + h_{t−1} W_hc + b_c)    (3)
    o_t = σ_o(x_t W_xo + h_{t−1} W_ho + w_co ⊙ c_t + b_o)             (4)
    h_t = o_t ⊙ σ_h(c_t)                                               (5)
The main difference to classical RNNs is the use of the gating functions i_t, f_t, o_t, which represent
the input, forget, and output gate at time t respectively. c_t is the cell activation vector, whereas x_t
and h_t represent the input feature vector and the hidden output vector respectively. The gates use the
typical sigmoidal nonlinearities σ_i, σ_f, σ_o and tanh nonlinearities σ_c and σ_h, with weight parameters
W_hi, W_hf, W_ho, W_xi, W_xf, and W_xo, which connect the different inputs and gates with the memory
cells and outputs, as well as biases b_i, b_f, and b_o. The cell state c_t itself is updated with a fraction of
the previous cell state that is controlled by f_t, and a new input state created from the element-wise
(Hadamard) product, denoted by ⊙, of i_t and the output of the cell state nonlinearity σ_c. Optional
peephole [11] connection weights w_ci, w_cf, w_co further influence the operation of the input, forget,
and output gates.
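For reference, Eqs. (1)-(5) translate directly into code. The following NumPy sketch is our own, not
the authors' implementation; parameter names follow the equations and p is assumed to be a
dictionary of trained weights:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, p):
        """One LSTM update, Eqs. (1)-(5), with peephole weights w_ci/w_cf/w_co."""
        i_t = sigmoid(x_t @ p["Wxi"] + h_prev @ p["Whi"] + p["wci"] * c_prev + p["bi"])
        f_t = sigmoid(x_t @ p["Wxf"] + h_prev @ p["Whf"] + p["wcf"] * c_prev + p["bf"])
        c_t = f_t * c_prev + i_t * np.tanh(x_t @ p["Wxc"] + h_prev @ p["Whc"] + p["bc"])
        o_t = sigmoid(x_t @ p["Wxo"] + h_prev @ p["Who"] + p["wco"] * c_t + p["bo"])
        h_t = o_t * np.tanh(c_t)
        return h_t, c_t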
The Phased LSTM model extends the LSTM model by adding a new time gate, k_t (Fig. 1(b)). The
opening and closing of this gate is controlled by an independent rhythmic oscillation specified by
three parameters; updates to the cell state c_t and h_t are permitted only when the gate is open. The
first parameter, τ, controls the real-time period of the oscillation. The second, r_on, controls the ratio
of the duration of the "open" phase to the full period. The third, s, controls the phase shift of the
oscillation to each Phased LSTM cell. All parameters can be learned during the training process.
[Figure 2: Diagram of Phased LSTM behaviour. (a) Top: the rhythmic oscillations of the time gates of
3 different neurons; the period τ and the phase shift s are shown for the lowest neuron. The parameter
r_on is the ratio of the open period to the total period τ. Bottom: in a multilayer scenario, the
timestamp is distributed to all layers, which are updated at the same time point. (b) Illustration of
Phased LSTM operation. A simple linearly increasing function is used as an input. The time gate
k_t of each neuron has a different τ, identical phase shift s, and an open ratio r_on of 0.05. The
input (top panel) flows through the time gate k_t (middle panel) to be held as the new cell state c_t
(bottom panel) only when k_t is open.]

Though other variants are possible, we propose here a particularly successful linearized formulation
of the time gate, with analogy to the rectified linear unit that propagates gradients well:

    φ_t = ((t − s) mod τ) / τ,

    k_t = 2φ_t / r_on          if φ_t < (1/2) r_on,
          2 − 2φ_t / r_on      if (1/2) r_on < φ_t < r_on,        (6)
          α φ_t                otherwise.
φ_t is an auxiliary variable, which represents the phase inside the rhythmic cycle. The gate k_t has three
phases (see Fig. 2a): in the first two phases, the "openness" of the gate rises from 0 to 1 (first phase)
and drops from 1 to 0 (second phase). During the third phase, the gate is closed and the previous cell
state is maintained. The leak with rate α is active in the closed phase, and plays a similar role as the
leak in a parametric "leaky" rectified linear unit [15] by propagating important gradient information
even when the gate is closed. Note that the linear slopes of k_t during the open phases of the time gate
allow effective transmission of error gradients.
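Eq. (6) can be written compactly as a vectorized function. The sketch below is our own; tau and s
may be per-neuron vectors, and alpha is the leak rate:

    import numpy as np

    def time_gate(t, tau, s, r_on, alpha=0.001):
        """Piecewise-linear time gate k_t of Eq. (6)."""
        phi = np.mod(t - s, tau) / tau                       # phase in [0, 1)
        return np.where(phi < 0.5 * r_on, 2.0 * phi / r_on,  # rising phase
               np.where(phi < r_on, 2.0 - 2.0 * phi / r_on,  # falling phase
                        alpha * phi))                        # closed, leaky phase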
In contrast to traditional RNNs, and even sparser variants of RNNs [19], updates in Phased LSTM
can optionally be performed at irregularly sampled time points tj . This allows the RNNs to work with
event-driven, asynchronously sampled input data. We use the shorthand notation cj = c_{tj} for cell
states at time tj (analogously for other gates and units), and let c_{j−1} denote the state at the previous
update time t_{j−1}. We can then rewrite the regular LSTM cell update equations for cj and hj (from
Eq. 3 and Eq. 5), using proposed cell updates c̃j and h̃j mediated by the time gate kj:

$$\tilde{c}_j = f_j \odot c_{j-1} + i_j \odot \sigma_c(x_j W_{xc} + h_{j-1} W_{hc} + b_c) \tag{7}$$
$$c_j = k_j \odot \tilde{c}_j + (1 - k_j) \odot c_{j-1} \tag{8}$$
$$\tilde{h}_j = o_j \odot \sigma_h(\tilde{c}_j) \tag{9}$$
$$h_j = k_j \odot \tilde{h}_j + (1 - k_j) \odot h_{j-1} \tag{10}$$
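A minimal NumPy sketch of one update step (Eqs. 7-10) follows; the parameter names and the `time_gate` helper from the previous sketch are illustrative assumptions, and peephole weights are omitted for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def phased_lstm_step(x_j, h_prev, c_prev, t_j, p):
    """One Phased LSTM update at timestamp t_j; `p` is a dict of parameters."""
    i = sigmoid(x_j @ p["Wxi"] + h_prev @ p["Whi"] + p["bi"])   # input gate
    f = sigmoid(x_j @ p["Wxf"] + h_prev @ p["Whf"] + p["bf"])   # forget gate
    o = sigmoid(x_j @ p["Wxo"] + h_prev @ p["Who"] + p["bo"])   # output gate
    c_tilde = f * c_prev + i * np.tanh(x_j @ p["Wxc"] + h_prev @ p["Whc"] + p["bc"])  # Eq. (7)
    h_tilde = o * np.tanh(c_tilde)                               # Eq. (9)
    k = time_gate(t_j, p["tau"], p["s"], p["r_on"])              # Eq. (6)
    c_j = k * c_tilde + (1.0 - k) * c_prev                       # Eq. (8)
    h_j = k * h_tilde + (1.0 - k) * h_prev                       # Eq. (10)
    return h_j, c_j
```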
A schematic of Phased LSTM with its parameters can be found in Fig. 2a, accompanied by an
illustration of the relationship between the time, the input, the time gate kt , and the state ct in Fig. 2b.
One key advantage of this Phased LSTM formulation lies in the rate of memory decay. For the simple
task of keeping an initial memory state c0 as long as possible without receiving additional inputs (i.e.
ij = 0 at all time steps tj), a standard LSTM with a nearly fully-opened forget gate (i.e. fj = 1 − ε)
after n update steps would contain

$$c_n = f_n\, c_{n-1} = (1 - \epsilon)(f_{n-1}\, c_{n-2}) = \ldots = (1 - \epsilon)^n\, c_0. \tag{11}$$
Figure 3: Frequency discrimination task. The network is trained to discriminate waves of different
frequency sets (shown in blue and gray); every circle is an input point. (a) Standard condition: the
data is regularly sampled every 1 ms. (b) High resolution sampling condition: new input points
are gathered every 0.1ms. (c) Asynchronous sampling condition: new input points are presented at
intervals of 0.02 ms to 10 ms. (d) The accuracy of Phased LSTM under the three sampling conditions
is maintained, but the accuracy of the BN-LSTM and standard LSTM drops significantly in the
sampling conditions (b) and (c). Error bars indicate standard deviation over 5 runs.
This means the memory for ε < 1 decays exponentially with every time step. Conversely, the Phased
LSTM state only decays during the open periods of the time gate, but maintains a perfect memory
during its closed phase, i.e. cj = c_{j−Δ} if kt = 0 for t_{j−Δ} ≤ t ≤ tj. Thus, during a single oscillation
period of length τ, the units only update during a duration of ron · τ, which will result in substantially
fewer than n update steps. Because of this cyclic memory, Phased LSTM can have a much longer and
adjustable memory length via the parameter τ.
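As an illustrative calculation (the numbers here are ours, chosen for exposition, not from the paper): with ε = 0.01 and n = 500 update steps, a standard LSTM retains (1 − 0.01)^500 ≈ 0.0066 of c0, whereas a Phased LSTM with ron = 0.05 performs only about 0.05 · 500 = 25 updates over the same span and thus retains (1 − 0.01)^25 ≈ 0.78 of the initial state.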
The oscillations impose sparse updates of the units, therefore substantially decreasing the total number
of updates during network operation. During training, this sparseness ensures that the gradient is
required to backpropagate through fewer updating timesteps, allowing an undecayed gradient to be
backpropagated through time and allowing faster learning convergence. Similar to the shielding of
the cell state ct (and its gradient) by the input gates and forget gates of the LSTM, the time gate
prevents external inputs and time steps from dispersing and mixing the gradient of the cell state.
3 Results
In the following sections, we investigate the advantages of the Phased LSTM model in a variety
of scenarios that require either precise timing of updates or learning from a long sequence. For all
the results presented here, the networks were trained with Adam [18] set to default learning rate
parameters, using Theano [2] with Lasagne [9]. Unless otherwise specified, the leak rate was set to
α = 0.001 during training and α = 0 during test. The phase shift, s, for each neuron was uniformly
chosen from the interval [0, τ]. The parameters τ and s were learned during training, while the open
ratio ron was fixed at 0.05 and not adjusted during training, except in the first task to demonstrate
that the model can train successfully while learning all parameters.
3.1 Frequency Discrimination Task
In this first experiment, the network is trained to distinguish two classes of sine waves from different
frequency sets: those with a period in a target range T ∼ U(5, 6), and those outside the range, i.e.
T ∼ {U(1, 5) ∪ U(6, 100)}, using U(a, b) for the uniform distribution on the interval (a, b). This
task illustrates the advantages of Phased LSTM, since it involves a periodic stimulus and requires
fine timing discrimination. The inputs are presented as pairs ⟨y, t⟩, where y is the amplitude and t
the timestamp of the sample from the input sine wave.
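For illustration, one way such labeled ⟨y, t⟩ sequences might be generated for the asynchronous condition (whose sampling details follow in the next paragraphs) is sketched here; the helper itself is ours, not the authors':

```python
import numpy as np

def freq_discrimination_sample(rng=np.random):
    """Draw one <y, t> sequence and its class label for the task above."""
    in_target = rng.rand() < 0.5
    if in_target:
        T = rng.uniform(5.0, 6.0)          # period inside the target band
    elif rng.rand() < 0.5:
        T = rng.uniform(1.0, 5.0)          # below the band
    else:
        T = rng.uniform(6.0, 100.0)        # above the band
    n = rng.randint(15, 126)               # number of samples, U(15, 125)
    duration = rng.uniform(15.0, 125.0)
    start = rng.uniform(0.0, 125.0 - duration)
    t = np.sort(rng.uniform(start, start + duration, size=n))  # asynchronous times
    phase = rng.uniform(0.0, 2.0 * np.pi)  # random phase shift
    y = np.sin(2.0 * np.pi * t / T + phase)
    return np.stack([y, t], axis=1), int(in_target)
```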
Figure 3 illustrates the task: the blue curves must be separated from the lighter curves based on
the samples shown as circles. We evaluate three conditions for sampling the input signals: In the
standard condition (Fig. 3a), the sine waves are regularly sampled every 1 ms; in the oversampled
condition (Fig. 3b), the sine waves are regularly sampled every 0.1 ms, resulting in ten times
as many data points. Finally, in the asynchronously sampled condition (Fig. 3c), samples are
collected at asynchronous times over the duration of the input. Additionally, the sine waves have
a uniformly drawn random phase shift from all possible shifts, random numbers of samples drawn
from U(15, 125), a random duration drawn from U(15, 125), and a start time drawn from U(0, 125 −
duration). The number of samples in the asynchronous and standard sampling conditions is equal.
The classes were approximately balanced, yielding a 50% chance success rate.

Figure 4: (a) Accuracy during training for the superimposed frequencies task. The Phased LSTM
outperforms both LSTM and BN-LSTM while exhibiting lower variance. Shading shows maximum
and minimum over 5 runs, while dark lines indicate the mean. (b) Mean-squared error over training
on the addition task, with an input length of 500. Note that longer periods accelerate learning
convergence.
Single-layer RNNs are trained on this data, each repeated with five random initial seeds. We compare
our Phased LSTM configuration to regular LSTM, and batch-normalized (BN) LSTM which has
found success in certain applications [14]. For the regular LSTM and the BN-LSTM, the timestamp
is used as an additional input feature dimension; for the Phased LSTM, the time input controls
the time gates kt . The architecture consists of 2-110-2 neurons for the LSTM and BN-LSTM, and
1-110-2 for the Phased LSTM. The oscillation periods of the Phased LSTMs are drawn uniformly in
the exponential space to give a wide variety of applicable frequencies, i.e., τ ∼ exp(U(0, 3)). All
other parameters match between models where applicable. The default LSTM parameters are given
in the Lasagne Theano implementation, and were kept for LSTM, BN-LSTM, and Phased LSTM.
Appropriate gate biasing was investigated but did not resolve the discrepancies between the models.
All three networks excel under standard sampling conditions as expected, as seen in Fig. 3d (left).
However, for the same number of epochs, increasing the data sampling by a factor of ten has
devastating effects for both LSTM and BN-LSTM, dropping their accuracy down to near chance
(Fig. 3d, middle). Presumably, if given enough training iterations, their accuracies would return to
the normal baseline. However, for the oversampled condition, Phased LSTM actually increases in
accuracy, as it receives more information about the underlying waveform. Finally, if the updates are
not evenly spaced and are instead sampled at asynchronous times, even when controlled to have the
same number of points as the standard sampling condition, it appears to make the problem rather
challenging for traditional state-of-the-art models (Fig. 3d, right). However, the Phased LSTM has
no difficulty with the asynchronously sampled data, because the time gates kt do not need regular
updates and can be correctly sampled at any continuous time within the period.
We extend the previous task by training the same RNN architectures on signals composed of two
sine waves. The goal is to distinguish signals composed of sine waves with periods T1 ∼ U(5, 6)
and T2 ∼ U(13, 15), each with independent phase, from signals composed of sine waves with
periods T1 ∼ {U(1, 5) ∪ U(6, 100)} and T2 ∼ {U(1, 13) ∪ U(15, 100)}, again with independent
phase. Despite being significantly more challenging, Fig. 4a demonstrates how quickly the Phased
LSTM converges to the correct solution compared to the standard approaches, using exactly the same
parameters. Additionally, the Phased LSTM appears to exhibit very low variance during training.
Figure 5: N-MNIST experiment. (a) Sketch of digit movement seen by the image sensor. (b)
Frame-based representation of an "8" digit from the N-MNIST dataset [24] obtained by integrating all
input spikes for each pixel. (c) Spatio-temporal representation of the digit, presented in three saccades
as in (a). Note that this representation shows the digit more clearly than the blurred frame-based one.
3.2 Adding Task
To investigate how introducing time gates helps learning when long memory is required, we revisit
an original LSTM task called the adding task [16]. In this task, a sequence of random numbers
is presented along with an indicator input stream. When there is a 0 in the indicator input stream,
the presented value should be ignored; a 1 indicates that the value should be added. At the end of
presentation the network produces a sum of all indicated values. Unlike the previous tasks, there is no
inherent periodicity in the input, and it is one of the original tasks that LSTM was designed to solve
well. This would seem to work against the advantages of Phased LSTM, but using a longer period for
the time gate kt could allow more effective training, as a unit opens only for a few timesteps during
training.
In this task, a sequence of numbers (of length 490 to 510) was drawn from U(−0.5, 0.5). Two
numbers in this stream of numbers are marked for addition: one from the first 10% of numbers
(drawn with uniform probability) and one in the last half (drawn with uniform probability), producing
a model of a long and noisy stream of data with only few significant points. Importantly, this should
challenge the Phased LSTM model because there is no inherent periodicity and every timestep could
contain the important marked points.
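A sketch of the data generation for this task, following the description above (the helper itself is our own, illustrative construction):

```python
import numpy as np

def adding_task_sample(min_len=490, max_len=510, rng=np.random):
    """Generate one (inputs, target) pair for the adding task."""
    T = rng.randint(min_len, max_len + 1)
    values = rng.uniform(-0.5, 0.5, size=T)
    markers = np.zeros(T)
    i = rng.randint(0, T // 10)       # one marked value in the first 10%
    j = rng.randint(T // 2, T)        # one marked value in the last half
    markers[i] = markers[j] = 1.0
    inputs = np.stack([values, markers], axis=1)  # (T, 2): value + indicator
    target = values[i] + values[j]    # sum of the two indicated values
    return inputs, target
```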
The same network architecture is used as before. The period τ was drawn uniformly in the exponential domain, comparing four sampling intervals exp(U(0, 2)), exp(U(2, 4)), exp(U(4, 6)), and
exp(U(6, 8)). Note that despite different τ values, the total number of LSTM updates remains approximately the same, since the overall sparseness is set by ron. However, a longer period τ provides
a longer jump through the past timesteps for the gradient during backpropagation-through-time.
Moreover, we investigate whether the model can learn longer sequences more effectively when longer
periods are used. By varying the period τ, the results in Fig. 4b show that longer τ accelerates
training, allowing the network to learn much longer sequences faster.
3.3 N-MNIST Event-Based Visual Recognition
To test performance on real-world asynchronously sampled data, we make use of the publicly available N-MNIST [24] dataset for neuromorphic vision. The recordings come from an event-based
a pixel when its local contrast change exceeds a threshold. Every event is encoded as a 4-tuple
⟨x, y, p, t⟩ with position x, y of the pixel, a polarity bit p (indicating a contrast increase or decrease),
and a timestamp t indicating the time when the event is generated. The recordings consist of events
generated by the vision sensor while the sensor undergoes three saccadic movements facing a static
digit from the MNIST dataset (Fig. 5a). An example of the event responses can be seen in Fig. 5c).
In previous work using event-based input data [21, 23], the timing information was sometimes
removed and instead a frame-based representation was generated by computing the pixel-wise
event-rate over some time period (as shown in Fig. 5(b)). Note that the spatio-temporal surface of
Table 1: Accuracy on N-MNIST.

                          CNN              BN-LSTM            Phased LSTM (τ = 100 ms)
Accuracy at Epoch 1       73.81% ± 3.5     40.87% ± 13.3      90.32% ± 2.3
Train/test ρ = 0.75       95.02% ± 0.3     96.93% ± 0.12      97.28% ± 0.1
Test with ρ = 0.4         90.67% ± 0.3     94.79% ± 0.03      95.11% ± 0.2
Test with ρ = 1.0         94.99% ± 0.3     96.55% ± 0.63      97.27% ± 0.1
LSTM Updates              —                3153 per neuron    159 ± 2.8 per neuron
events in Fig. 5(c) reveals details of the digit much more clearly than in the blurred frame-based
representation. The Phased LSTM allows us to operate directly on such spatio-temporal event streams.
Table 1 summarizes classification results for three different network types: a CNN trained on frame-based representations of N-MNIST digits and two RNNs, a BN-LSTM and a Phased LSTM, trained
directly on the event streams. Regular LSTM is not shown, as it was found to perform worse. The
CNN was comprised of three alternating layers of 8 kernels of 5x5 convolution with a leaky ReLU
nonlinearity and 2x2 max-pooling, which were then fully-connected to 256 neurons, and finally
fully-connected to the 10 output classes. The event pixel address was used to produce a 40-dimensional
Therefore, the network architecture was 41-110-10 for the Phased LSTM and 42-110-10 for the
BN-LSTM, with the time given as an extra input dimension to the BN-LSTM.
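As an illustration of the input construction just described, the following sketch builds a 41-dimensional Phased LSTM input from a single event; the embedding matrix here is a random stand-in for the learned embedding [9], and the helper names are ours (N-MNIST events lie on a 34x34 pixel grid):

```python
import numpy as np

# One embedding row per pixel address; stand-in for a learned matrix.
embedding = 0.01 * np.random.randn(34 * 34, 40)

def event_to_input(x, y, p, t):
    """Map an event <x, y, p, t> to a (41-dim feature, timestamp) pair."""
    addr = y * 34 + x                                          # flattened pixel address
    feat = np.concatenate([embedding[addr], [2.0 * p - 1.0]])  # append polarity in {-1, +1}
    return feat, t                                             # t drives the time gate k_t
```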
Table 1 shows that Phased LSTM trains faster than alternative models and achieves much higher
accuracy with a lower variance even within the first epoch of training. We further define a factor, ρ,
which represents the probability that an event is included, i.e. ρ = 1.0 means all events are included.
The RNN models are trained with ρ = 0.75, and again the Phased LSTM achieves slightly higher
performance than the BN-LSTM model. When testing with ρ = 0.4 (fewer events) and ρ = 1.0 (more
events) without retraining, both RNN models perform well and greatly outperform the CNN. This is
because the accumulated statistics of the frame-based input to the CNN change drastically when the
overall spike rates are altered. The Phased LSTM RNNs seem to have learned a stable spatio-temporal
surface on the input and are only slightly altered by sampling it more or less frequently.
Finally, as each neuron of the Phased LSTM only updates about 5% of the time, on average, 159
updates are needed in comparison to the 3153 updates needed per neuron of the BN-LSTM, leading
to an approximate twenty-fold reduction in run time compute cost. It is also worth noting that these
results form a new state-of-the-art accuracy for this dataset [24, 7].
3.4 Visual-Auditory Sensor Fusion for Lip Reading
Finally, we demonstrate the use of Phased LSTM on a task involving sensors with different sampling
rates. Few RNN models ever attempt to merge sensors of different input frequencies, although the
sampling rates can vary substantially. For this task, we use the GRID dataset [8]. This corpus contains
video and audio of 30 speakers each uttering 1000 sentences composed of a fixed grammar and a
constrained vocabulary of 51 words. The data was randomly divided into a 90%/10% train-test set.
An OpenCV [17] implementation of a face detector was used on the video stream to extract the face
which was then resized to grayscale 48x48 pixels. The goal here is to obtain a model that can use
audio alone, video alone, or both inputs to robustly classify the sentence. However, since the audio
alone is sufficient to achieve greater than 99% accuracy, sensor modalities were randomly masked to
zero during training to encourage robustness towards sensory noise and loss.
The network architecture first separately processes video and audio data before merging them in
two RNN layers that receive both modalities. The video stream uses three alternating layers of 16
kernels of 5x5 convolution and 2x2 subsampling to reduce the input of 1x48x48 to 16x2x2, which is
then used as the input to 110 recurrent units. The audio stream connects the 39-dimensional MFCCs
(13 MFCCs with first and second derivatives) to 150 recurrent units. Both streams converge into
the Merged-1 layer with 250 recurrent units, which is connected to a second hidden layer with 250
recurrent units named Merged-2. The output of the Merged-2 layer is fully-connected to 51 output
nodes, which represent the vocabulary of GRID. For the Phased LSTM network, all recurrent units
are Phased LSTM units.
Figure 6: Lip reading experiment. (a) Inputs and openness of time gates for the lip reading experiment.
Note that the 25fps video frame rate is a multiple of the audio input frequency (100 Hz). Phased
LSTM timing parameters are configured to align to the sampling time of their inputs. (b) Example
input of video (top) and audio (bottom). (c) Test loss using the video stream alone. Video frame rate
is 40ms. Top: low resolution condition, MFCCs computed every 40ms with a network update every
40 ms; Bottom: high resolution condition, MFCCs every 10 ms with a network update every 10 ms.
In the audio and video Phased LSTM layers, we manually align the open periods of the time gates
to the sampling times of the inputs and disable learning of the τ and s parameters (see Fig. 6a).
This prevents presenting zeros or artificial interpolations to the network when data is not present.
In the merged layers, however, the parameters of the time gate are learned, with the period τ of the
first merged layer drawn from U(10, 1000) and the second from U(500, 3000). Fig. 6b shows a
visualization of one frame of video and the complete duration of an audio sample.
During evaluation, all networks achieve greater than 98% accuracy on audio-only and combined
audio-video inputs. However, video-only evaluation with an audio-video capable network proved
the most challenging, so Fig. 6c focuses on these results (though the ranking of the models is
representative of all conditions). Two differently-sampled versions of the data were used: in the first
"low resolution" version (Fig. 6c, top), the sampling rate of the MFCCs was matched to the sampling
rate of the 25 fps video. In the second "high resolution" condition, the sampling rate was set to the
more common value of 100 Hz sampling frequency (Fig. 6c, bottom and shown in Fig. 6a). The
higher audio sampling rate did not increase accuracy, but allows for a faster latency (10ms instead of
40ms). The Phased LSTM again converges substantially faster than both LSTM and batch-normalized
LSTM. The peak accuracy of 81.15% compares favorably against lipreading-focused state-of-the-art
approaches [28] while avoiding manually-crafted features.
4 Discussion
The Phased LSTM has many surprising advantages. With its rhythmic periodicity, it acts like a
learnable, gated Fourier transform on its input, permitting very fine timing discrimination. Alternatively, the rhythmic periodicity can be viewed as a kind of persistent dropout that preserves state [27],
enhancing model diversity. The rhythmic inactivation can even be viewed as a shortcut to the past
for gradient backpropagation, accelerating training. The presented results support these interpretations, demonstrating the ability to discriminate rhythmic signals and to learn long memory traces.
Importantly, in all experiments, Phased LSTM converges more quickly and theoretically requires
only 5% of the computes at runtime, while often improving in accuracy compared to standard LSTM.
The presented methods can also easily be extended to GRUs [6], and it is likely that even simpler
models, such as ones that use a square-wave-like oscillation, will perform well, thereby making even
more efficient and encouraging alternative Phased LSTM formulations. An inspiration for using
oscillations in recurrent networks comes from computational neuroscience [3], where rhythms have
been shown to play important roles for synchronization and plasticity [22]. Phased LSTMs were
not designed as biologically plausible models, but may help explain some of the advantages and
robustness of learning in large spiking recurrent networks.
References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
arXiv preprint arXiv:1409.0473, 2014.
[2] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley,
and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for
scientific computing conference (SciPy), volume 4, page 3, 2010.
[3] G. Buzsaki. Rhythms of the Brain. Oxford University Press, 2006.
[4] G. Cauwenberghs. An analog VLSI recurrent neural network learning a continuous-time trajectory. IEEE
Transactions on Neural Networks, 7(2):346–361, 1996.
[5] K. Cho, A. Courville, and Y. Bengio. Describing multimedia content using attention-based encoder-decoder
networks. IEEE Transactions on Multimedia, 17(11):1875–1886, 2015.
[6] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078,
2014.
[7] G. K. Cohen, G. Orchard, S. H. Ieng, J. Tapson, R. B. Benosman, and A. van Schaik. Skimming digits:
Neuromorphic classification of spike-encoded images. Frontiers in Neuroscience, 10(184), 2016.
[8] M. Cooke, J. Barker, S. Cunningham, and X. Shao. An audio-visual corpus for speech perception and
automatic speech recognition. The Journal of the Acoustical Society of America, 120(5):2421–2424, 2006.
[9] S. Dieleman et al. Lasagne: First release., Aug. 2015.
[10] K.-I. Funahashi and Y. Nakamura. Approximation of dynamical systems by continuous time recurrent
neural networks. Neural Networks, 6(6):801–806, 1993.
[11] F. A. Gers and J. Schmidhuber. Recurrent nets that time and count. In Neural Networks, 2000. IJCNN
2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on, volume 3, pages 189–194.
IEEE, 2000.
[12] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[13] A. Graves, A.-R. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks.
In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages
6645–6649, 2013.
[14] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta,
A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567,
2014.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on
imagenet classification. In The IEEE International Conference on Computer Vision (ICCV), 2015.
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[17] Itseez. Open source computer vision library. https://github.com/itseez/opencv, 2015.
[18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511,
2014.
[20] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based
language model. Interspeech, 2:3, 2010.
[21] D. Neil and S.-C. Liu. Effective sensor fusion with event-based sensors and deep network architectures. In
IEEE Int. Symposium on Circuits and Systems (ISCAS), 2016.
[22] B. Nessler, M. Pfeiffer, L. Buesing, and W. Maass. Bayesian computation emerges in generic cortical
microcircuits through spike-timing-dependent plasticity. PLoS Comput Biol, 9(4):e1003037, 2013.
[23] P. O'Connor, D. Neil, S.-C. Liu, T. Delbruck, and M. Pfeiffer. Real-time classification and sensor fusion
with a spiking Deep Belief Network. Frontiers in Neuroscience, 7, 2013.
[24] G. Orchard, A. Jayawant, G. Cohen, and N. Thakor. Converting static image datasets to spiking neuromorphic datasets using saccades. arXiv: 1507.07629, 2015.
[25] B. A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation,
1(2):263–269, 1989.
[26] C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck. Retinomorphic event-based
vision sensors: bioinspired cameras with spiking outputs. Proceedings of the IEEE, 102(10):1470–1484,
2014.
[27] S. Semeniuta, A. Severyn, and E. Barth. Recurrent dropout without memory loss. arXiv, arXiv:1603.05118,
2016.
[28] M. Wand, J. Koutník, and J. Schmidhuber. Lipreading with long short-term memory. arXiv preprint
arXiv:1601.08188, 2016.
[29] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend
and tell: Neural image caption generation with visual attention. In International Conference on Machine
Learning, 2015.
Iterative Refinement of the Approximate Posterior for
Directed Belief Networks
R Devon Hjelm
University of New Mexico and the Mind Research Network
[email protected]
Kyunghyun Cho
Courant Institute & Center for Data Science, New York University
[email protected]
Junyoung Chung
University of Montreal
[email protected]
Russ Salakhutdinov
Carnegie Melon University
[email protected]
Vince Calhoun
University of New Mexico and the Mind Research Network
[email protected]
Nebojsa Jojic
Microsoft Research
[email protected]
Abstract
Variational methods that rely on a recognition network to approximate the posterior
of directed graphical models offer better inference and learning than previous
methods. Recent advances that exploit the capacity and flexibility in this approach
have expanded what kinds of models can be trained. However, as a proposal for the
posterior, the capacity of the recognition network is limited, which can constrain the
representational power of the generative model and increase the variance of Monte
Carlo estimates. To address these issues, we introduce an iterative refinement
procedure for improving the approximate posterior of the recognition network and
show that training with the refined posterior is competitive with state-of-the-art
methods. The advantages of refinement are further evident in an increased effective
sample size, which implies a lower variance of gradient estimates.
1 Introduction
Variational methods have surpassed traditional methods such as Markov chain Monte Carlo [MCMC,
15] and mean-field coordinate ascent [25] as the de-facto standard approach for training directed
graphical models. Helmholtz machines [3] are a type of directed graphical model that approximate
the posterior distribution with a recognition network that provides fast inference as well as flexible
learning which scales well to large datasets. Many recent significant advances in training Helmholtz
machines come as estimators for the gradient of the objective w.r.t. the approximate posterior. The
most successful of these methods, variational autoencoders [VAE, 12], relies on a re-parameterization
of the latent variables to pass the learning signal to the recognition network. This type of parameterization, however, is not available with discrete units, and the naive Monte Carlo estimate of the
gradient has too high variance to be practical [3, 12].
However, good estimators are available through importance sampling [1], input-dependent baselines
[13], a combination baselines and importance sampling [14], and parametric Taylor expansions [9].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Each of these methods strive to be a lower-variance and unbiased gradient estimator. However, the
reliance on the recognition network means that the quality of learning is bounded by the capacity of
the recognition network, which in turn raises the variance.
We demonstrate reducing the variance of Monte Carlo based estimators by iteratively refining
the approximate posterior provided by the recognition network. The complete learning algorithm
follows expectation-maximization [EM, 4, 16], where in the E-step the variational parameters of
the approximate posterior are initialized using the recognition network, then iteratively refined. The
refinement procedure provides an asymptotically-unbiased estimate of the variational lowerbound,
which is tight w.r.t. the true posterior and can be used to easily train both the recognition network and
generative model during the M-step. The variance-reducing refinement is available to any directed
graphical model and can give a more accurate estimate of the log-likelihood of the model.
For the iterative refinement step, we use adaptive importance sampling [AIS, 17]. We demonstrate the
proposed refinement procedure is effective for training directed belief networks, providing a better
or competitive estimates of the log-likelihood. We also demonstrate the improved posterior from
refinement can improve inference and accuracy of evaluation for models trained by other methods.
2 Directed Belief Networks and Variational Inference
A directed belief network is a generative directed graphical model consisting of a conditional density
p(x|h) and a prior p(h), such that the joint density can be expressed as p(x, h) = p(x|h)p(h). In
particular, the joint density factorizes into a hierarchy of conditional densities and a prior:
$p(x, h) = p(x|h_1)\, p(h_L) \prod_{l=1}^{L-1} p(h_l | h_{l+1})$, where $p(h_l|h_{l+1})$ is the conditional density at the l-th layer and
p(hL ) is a prior distribution of the top layer. Sampling from the model can be done simply via
ancestral-sampling, first sampling from the prior, then subsequently sampling from each layer until
reaching the observation, x. This latent variable structure can improve model capacity, but inference
can still be intractable, as is the case in sigmoid belief networks [SBN, 15], deep belief networks
[DBN, 11], deep autoregressive networks [DARN, 7], and other models in which each of the
conditional distributions involves complex nonlinear functions.
2.1 Variational Lowerbound of Directed Belief Network
The objective we consider is the likelihood function, p(x; θ), where θ represents the parameters of the
generative model (e.g., a directed belief network). Estimating the likelihood function given the joint
distribution, p(x, h; θ), above is not generally possible as it requires intractable marginalization over
h. Instead, we introduce an approximate posterior, q(h|x), as a proposal distribution. In this case,
the log-likelihood can be bounded from below†:
$$\log p(x) = \log \sum_h p(x, h) \ge \sum_h q(h|x) \log \frac{p(x, h)}{q(h|x)} = \mathbb{E}_{q(h|x)}\left[\log \frac{p(x, h)}{q(h|x)}\right] := \mathcal{L}_1, \tag{1}$$
where we introduce the subscript in the lowerbound to make the connection to importance sampling
later. The bound is tight (e.g., L1 = log p(x)) when the KL divergence between the approximate and
true posterior is zero (e.g., DKL (q(h|x)||p(h|x)) = 0). The gradients of the lowerbound w.r.t. the
generative model can be approximated using the Monte Carlo approximation of the expectation:
$$\nabla_\theta \mathcal{L}_1 \approx \frac{1}{K} \sum_{k=1}^{K} \nabla_\theta \log p(x, h^{(k)}; \theta), \qquad h^{(k)} \sim q(h|x). \tag{2}$$
The success of variational inference lies in the choice of approximate posterior, as a poor choice can
result in a looser variational bound. A deep feed-forward recognition network parameterized by φ has
become a popular choice, such that q(h|x) = q(h|x; φ), as it offers fast and flexible data-dependent
inference [see, e.g., 22, 12, 13, 20]. Generally known as a ?Helmholtz machine? [3], these approaches
often require additional tricks to train, as the naive Monte Carlo gradient of the lowerbound w.r.t.
the variational parameters has high variance. In addition, the variational lowerbound in Eq. (1) is
constrained by the assumptions implicit in the choice of approximate posterior, as the approximate
posterior must be within the capacity of the recognition network and factorial.
† For clarity of presentation, we will often omit dependence on parameters θ of the generative model, so that
p(x, h) = p(x, h; θ).
Figure 1: Iterative refinement for variational inference. An initial estimate of the variational parameters is
made through a recognition network. The variational parameters are then updated iteratively, maximizing the
lowerbound. The final approximate posterior is used to train the generative model by sampling. The recognition
network parameters are updated using the KL divergence between the refined posterior qk and the output of the
recognition network q0 .
2.2 Importance Sampled Variational Lowerbound
These assumptions can be relaxed by using an unbiased K-sampled importance weighted estimate of
the likelihood function (see [2] for details):
$$\mathcal{L}_1 \le \mathcal{L}_K = \frac{1}{K}\sum_{k=1}^{K} \frac{p(x, h^{(k)})}{q(h^{(k)}|x)} = \frac{1}{K}\sum_{k=1}^{K} w^{(k)} \approx p(x), \tag{3}$$

where $h^{(k)} \sim q(h|x)$ and $w^{(k)}$ are the importance weights. This lowerbound is tighter than the
single-sample version provided in Eq. (1) and is an asymptotically unbiased estimate of the likelihood
as $K \to \infty$.
The gradient of the lowerbound w.r.t. the model parameters θ is simple and can be estimated as:

$$\nabla_\theta \mathcal{L}_K = \sum_{k=1}^{K} \tilde{w}^{(k)} \nabla_\theta \log p(x, h^{(k)}; \theta), \qquad \text{where } \tilde{w}^{(k)} = \frac{w^{(k)}}{\sum_{k'=1}^{K} w^{(k')}}. \tag{4}$$
The estimator in Eq. (3) can reduce the variance of the gradients, $\nabla_\theta \mathcal{L}_K$, but in general additional
variance reduction is needed [14]. Alternatively, importance sampling yields an estimate of the
inclusive KL divergence, DKL(p(h|x)||q(h|x)), which can be used for training the parameters φ of
the recognition network [1]. However, it is well known that importance sampling can yield heavily-skewed distributions over the importance weights [5], so that only a small number of the samples will
effectively have non-zero weight. This is consequential not only in training, but also for evaluating
models when using Eq. (3) to estimate test log-probabilities, which requires drawing a very large
number of samples (N ≈ 100,000 in the literature for models trained on MNIST [7]).
The effective sample size, n_e, of importance-weighted estimates increases and is optimal when the
approximate posterior matches the true posterior:

$$n_e = \frac{\left(\sum_{k=1}^{K} w^{(k)}\right)^2}{\sum_{k=1}^{K} \left(w^{(k)}\right)^2} \;\to\; \frac{\left(\sum_{k=1}^{K} p(x, h^{(k)})/p(h^{(k)}|x)\right)^2}{\sum_{k=1}^{K} \left(p(x, h^{(k)})/p(h^{(k)}|x)\right)^2} = \frac{(K\,p(x))^2}{K\,p(x)^2} = K. \tag{5}$$
Conversely, importance sampling from a poorer approximate posterior will have a lower effective
sample size, resulting in higher variance of the gradient estimates. In order to improve the
effectiveness of importance sampling, we need a method for improving the approximate posterior
from those provided by the recognition network.
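As a small illustration (a NumPy sketch; the function name is ours), the effective sample size of Eq. (5) can be computed from the log importance weights as:

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS of Eq. (5) from K log importance weights log w^(k)."""
    w = np.exp(log_w - np.max(log_w))   # rescaling leaves the ESS unchanged
    return np.sum(w) ** 2 / np.sum(w ** 2)
```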
3 Iterative Refinement for Variational Inference (IRVI)
To address the above issues, iterative refinement for variational inference (IRVI) uses the recognition
network as a preliminary guess of the posterior, then refines the posterior through iterative updates of
the variational parameters. For the refinement step, IRVI uses a stochastic transition operator, g(.),
that maximizes the variational lowerbound.
3
An overview of IRVI is available in Figure 1. For the expectation (E)-step, we feed the observation x
through the recognition network to get the initial parameters, μ_0, of the approximate posterior,
q_0(h|x; φ). We then refine μ_0 by applying T updates to the variational parameters, μ_{t+1} = g(μ_t, x),
iterating through T parameterizations μ_1, . . . , μ_T of the approximate posterior q_t(h|x).
With the final set of parameters, μ_T, the gradient estimate of the recognition parameters φ in the
maximization (M)-step is taken w.r.t. the negative exclusive KL divergence:

$$-\nabla_\phi D_{\mathrm{KL}}(q_T(h|x)\,\|\,q_0(h|x; \phi)) \approx \frac{1}{K} \sum_{k=1}^{K} \nabla_\phi \log q_0(h^{(k)}|x; \phi), \tag{6}$$
where h^(k) ∼ q_T(h|x). Similarly, the gradients w.r.t. the parameters of the generative model θ
follow Eqs. (2) or (4) using samples from the refined posterior q_T(h|x). As an alternative to Eq. (6),
we can maximize the negative inclusive KL divergence using the refined approximate posterior:
$$-\nabla_\phi D_{\mathrm{KL}}(p(h|x)\,\|\,q_0(h|x; \phi)) \approx \sum_{k=1}^{K} \tilde{w}^{(k)} \nabla_\phi \log q_0(h^{(k)}|x; \phi). \tag{7}$$
The form of the IRVI transition operator, g(μ_t, x), depends on the problem. In the case of continuous
variables, we can make use of the VAE re-parameterization with the gradient of the lowerbound in
Eq. (1) for our refinement step (see supplementary material). However, as this is not available with
discrete units, we take a different approach that relies on adaptive importance sampling.
3.1 Adaptive Importance Refinement (AIR)
Adaptive importance sampling [AIS, 17] provides a general approach for iteratively refining the
variational parameters. For Bernoulli distributions, we observe that the mean parameter of the true
posterior, μ̂, can be written as the expected value of the latent variables:

$$\hat{\mu} = \mathbb{E}_{p(h|x)}[h] = \sum_h h\, p(h|x) = \frac{1}{p(x)} \sum_h q(h|x)\, h\, \frac{p(x, h)}{q(h|x)} \approx \sum_{k=1}^{K} \tilde{w}^{(k)} h^{(k)}. \tag{8}$$
As the initial estimator typically has high variance, AIS iteratively moves μ_t toward μ̂ by applying
Eq. (8) until a stopping criterion is met. While using the update $g(\mu_t, x, \gamma) = \sum_{k=1}^{K} \tilde{w}^{(k)} h^{(k)}$ in
principle works, a convex combination of the importance sample estimate of the current step and the
parameters from the previous step tends to be more stable:

$$h^{(k)} \sim \mathrm{Bernoulli}(\mu_t); \qquad \mu_{t+1} = g(\mu_t, x, \gamma) = (1 - \gamma)\,\mu_t + \gamma \sum_{k=1}^{K} \tilde{w}^{(k)} h^{(k)}. \tag{9}$$
Here, γ is the inference rate and (1 − γ) can be thought of as the adaptive "damping" rate. This
approach, which we call adaptive importance refinement (AIR), should work with any discrete
parametric distribution. Although AIR is applicable with continuous Gaussian variables, which
model second-order statistics, we leave adapting AIR to continuous latent variables for future work.
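A minimal NumPy sketch of the AIR update of Eq. (9) for a factorized Bernoulli posterior follows; `log_joint` and all names here are illustrative assumptions rather than the authors' implementation (which is linked in Section 3.2):

```python
import numpy as np

def air_refine(mu0, log_joint, T=20, K=20, gamma=0.1, rng=np.random):
    """Iteratively refine Bernoulli means mu via Eq. (9), starting from mu0.

    log_joint(h): returns log p(x, h) for each row of a (K, d) binary array h.
    """
    mu = mu0.copy()
    for _ in range(T):
        h = (rng.uniform(size=(K, mu.size)) < mu).astype(float)       # h ~ q_t(h|x)
        log_q = (h * np.log(mu + 1e-7)
                 + (1.0 - h) * np.log(1.0 - mu + 1e-7)).sum(axis=1)   # log q_t(h|x)
        log_w = log_joint(h) - log_q                                  # log importance weights
        w = np.exp(log_w - log_w.max())
        w_tilde = w / w.sum()                                         # normalized weights
        mu = (1.0 - gamma) * mu + gamma * (w_tilde[:, None] * h).sum(axis=0)
    return mu
```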
3.2 Algorithm and Complexity
The general AIR algorithm follows Algorithm 1 with gradient variations following Eqs. (2), (4),
(6), and (7). While iterative refinement may reduce the variance of stochastic gradient estimates
and speed up learning, it comes at a computational cost, as each update is T times more expensive than fixed approximations. However, in addition to potential learning benefits, AIR can also
improve the approximate posterior of an already-trained directed belief network at test, independently of how the model was trained. Our implementation following Algorithm 1 is available at
https://github.com/rdevon/IRVI.
4 Related Work
Adaptive importance refinement (AIR) trades computation for expressiveness and is similar in
this regard to the refinement procedure of hybrid MCMC for variational inference [HVI, 24] and
Algorithm 1 AIR
Require: A generative model p(x, h; θ) = p(x|h; θ)p(h; θ) and a recognition network μ_0 = f(x; φ)
Require: A transition operator g(μ, x, γ) and inference rate γ.
Compute μ_0 = f(x; φ) for q_0(h|x; φ)
for t = 1 : T do
    Draw K samples h^(k) ∼ q_t(h|x) and compute normalized importance weights w̃^(k)
    μ_t = (1 − γ) μ_{t−1} + γ Σ_{k=1}^{K} w̃^(k) h^(k)
end for
if reweight then
    Δθ ∝ Σ_{k=1}^{K} w̃^(k) ∇_θ log p(x, h^(k); θ)
else
    Δθ ∝ (1/K) Σ_{k=1}^{K} ∇_θ log p(x, h^(k); θ)
end if
if inclusive KL divergence then
    Δφ ∝ Σ_{k=1}^{K} w̃^(k) ∇_φ log q_0(h^(k)|x; φ)
else
    Δφ ∝ (1/K) Σ_{k=1}^{K} ∇_φ log q_0(h^(k)|x; φ)
end if
normalizing flows for VAE [NF, 21]. HVI has a similar complexity to AIR, as it requires re-estimating
the lowerbound at every step. While NF can be less expensive than AIR, both HVI and NF rely on
the VAE re-parameterization to work, and thus cannot be applied to discrete variables. Sequential
importance sampling [SIS, 5] can offer a better refinement step than AIS but typically requires
resampling to control variance. While parametric versions exist that could be applicable to training
directed graphical models with discrete units [8, 18], their applicability as a general refinement
procedure is limited as the refinement parameters need to be learned.
Importance sampling is central to reweighted wake-sleep [RWS, 1], importance-weighted autoencoders [IWAE, 2], variational inference for Monte Carlo objectives [VIMCO, 14], and recent work on
stochastic feed-forward networks [SFFN, 26, 19]. While each of these methods is competitive, they
rely on importance samples from the recognition network and do not offer the low-variance estimates
available from AIR. Neural variational inference and learning [NVIL, 13] is a single-sample and
biased version of VIMCO, which is greatly outperformed by techniques that use importance sampling.
Both NVIL and VIMCO reduce the variance of the Monte Carlo estimates of gradients by using an
input-dependent baseline, but this approach does not necessarily provide a better posterior and cannot
be used to give better estimates of the likelihood function or expectations.
Finally, IRVI is meant to be a general approach to refining the approximate posterior. IRVI is not
limited to the refinement step provided by AIR, and many different types of refinement steps are
available to improve the posterior for models above (see supplementary material for the continuous
case). SIS and sequential importance resampling [SIR, 6] can be used as an alternative to AIR and
may provide a better refinement step for IRVI.
5 Experiments
We evaluate iterative refinement for variational inference (IRVI) using adaptive importance refinement
(AIR) for both training and evaluating directed belief networks. We train and test on the following
benchmarks: the binarized MNIST handwritten digit dataset [23] and the Caltech-101 Silhouettes
dataset. We centered the MNIST and Caltech datasets by subtracting the mean-image over the
training set when used as input to the recognition network. We also train additional models using the
re-weighted wake-sleep algorithm [RWS, 1], the state of the art for many configurations of directed
belief networks with discrete variables on these datasets for comparison and to demonstrate improving
the approximate posteriors with refinement. With our experiments, we show that 1) IRVI can train
a variety of directed models as well as or better than existing methods, 2) the gains from refinement
improve the approximate posterior and can be applied to models trained by other algorithms, and 3)
IRVI can be used to improve a model with a relatively simple approximate posterior.
Models were trained using the RMSprop algorithm [10] with a batch size of 100 and early stopping
by recorded best variational lower bound on the validation dataset. For AIR, 20 "inference steps"
(K = 20), 20 adaptive samples (M = 20), and an adaptive damping rate, (1 − γ), of 0.9 were used
during inference, chosen from validation in initial experiments. 20 posterior samples (N = 20) were
used for model parameter updates for both AIR and RWS. All models were trained for 500 epochs
and were fine-tuned for an additional 500 with a decaying learning rate and SGD.

Figure 2: The log-likelihood (left) and normalized effective sample size (right) with epochs in log-scale on the
training set for AIR with 5 and 20 refinement steps (vanilla AIR), reweighted AIR with 5 and 20 refinement
steps, reweighted AIR with inclusive KL objective and 5 or 20 refinement steps, and reweighted wake-sleep
(RWS), all with a single stochastic latent layer. All models were evaluated with 100 posterior samples, their
respective number of refinement steps for the effective sample size (ESS), and with 20 refinement steps of AIR
for the log-likelihood. Despite longer wall-clock time per epoch,
We use a generative model composed of a) a factorized Bernoulli prior as with sigmoid belief networks
(SBNs) or b) an autoregressive prior, as in published MNIST results with deep autoregressive networks
[DARN, 7]:
$$\text{a)}\quad p(h) = \prod_i p(h_i), \quad P(h_i = 1) = \sigma(b_i); \qquad \text{b)}\quad P(h_i = 1) = \sigma\Big(\sum_{j=0}^{i-1} W_{r\,i,j}\, h_j + b_i\Big), \tag{10}$$
where σ is the sigmoid function (σ(x) = 1/(1 + exp(−x))), Wr is a lower-triangular square matrix,
and b is the bias vector.
For our experiments, we use conditional and approximate posterior densities that follow Bernoulli
distributions:
$$P(h_{i,l} = 1 \,|\, h_{l+1}) = \sigma(W_{l\,i,:} \cdot h_{l+1} + b_{i,l}), \tag{11}$$
where Wl is a weight matrix between the l and l + 1 layers. As in Gregor et al. [7] with MNIST, we
do not use autoregression on the observations, x, and use a fully factorized approximate posterior.
5.1 Variance Reduction and Choosing the AIR Objective
The effective sample size (ESS) in Eq. (5) is a good indicator of the variance of the gradient estimate. In
Fig. 2 (right), we observe that the ESS improves as we take more AIR steps when training a deep
belief network (AIR(5) vs AIR(20)). When the approximate posterior is not refined (RWS), the ESS
stays low throughout training, eventually resulting in a worse model. This improved ESS reveals
itself as faster convergence in terms of the exact log-likelihood in the left panel of Fig. 2 (see the
progress of each curve until 100 epochs. See also supplementary materials for wall-clock time.)
This faster convergence does not guarantee a good final log-likelihood, as the latter depends on the
tightness of the lowerbound rather than the variance of its estimate. This is most apparent when
comparing AIR(5), AIR+RW(5) and AIR+RW+IKL(5). AIR(5) has a low variance (high ESS) but
computes the gradient of a looser lowerbound from Eq. (2), while the other two compute the gradient
of a tighter lowerbound from Eq. (4). This results in AIR(5) converging faster than the other two,
while the final log-likelihood estimates are better for the other two.
We however observe that the final log-likelihood estimates are comparable across all three variants
(AIR, AIR+RW and AIR+RW+IKL) when a sufficient number of AIR steps are taken so that L1 is
sufficiently tight. When 20 steps were taken, we observe that the AIR(20) converges faster as well as
achieves a better log-likelihood compared to AIR+RW(20) and AIR+RW+IKL(20). Based on these
observations, we use vanilla AIR (subsequently just ?AIR?) in our following experiments.
Table 1: Results for adaptive importance sampling iterative refinement (AIR), reweighted wake-sleep (RWS),
and RWS with refinement with AIR at test (RWS+) for a variety of model configurations. Additional sigmoid
belief networks (SBNs) trained with neural variational inference and learning (NVIL) from †Mnih and Gregor
[13] and variational inference for Monte Carlo objectives (VIMCO) from ‡Mnih and Rezende [14]. AIR is
trained with 20 inference steps and adaptive samples (K = 20, M = 20) in training (*the 3-layer SBN was trained
with 50 steps and an inference rate of 0.05). NVIL DARN results are from fDARN, and VIMCO was trained
using 50 posterior samples (as opposed to 20 with AIR and RWS).
                                          MNIST                            Caltech-101 Silhouettes
Model               RWS      RWS+     AIR      NVIL†   VIMCO‡     RWS      RWS+     AIR
SBN 200             102.51   102.00   100.92   113.1   —          121.38   118.63   116.61
SBN 200-200         93.82    92.83    92.90    99.8    —          112.86   107.20   106.94
SBN 200-200-200     92.00    91.02    92.56*   96.7    90.9       110.57   104.54   104.36
DARN 200            86.91    86.21    85.89    92.5    —          113.69   109.73   109.76
DARN 500            85.40    84.71    85.46    90.7    —          —        —        —

5.2 Training and Density Estimation
We evaluate AIR for training SBNs with one, two, and three layers of 200 hidden units and DARN
with 200 and 500 hidden units, comparing against our implementation of RWS. All models were
tested using 100,000 posterior samples to estimate the lowerbounds and average test log-probabilities.
When training SBNs with AIR and RWS, we used a completely deterministic network for the
approximate posterior. For example, for a 2-layer SBN, the approximate posterior factors into the
approximate posteriors for the top and the bottom hidden layers, and the initial variational parameters
(2)
(1)
of the top layer, ?0 are a function of the initial variational parameters of the first layer, ?0 :
(1)
(2)
q0 (h1 , h2 |x) = q0 (h1 |x; ?0 )q(h2 |x; ?0 );
(1)
?0 = f1 (x; ? 1 );
(2)
(1)
?0 = f2 (?0 ; ? 2 ). (12)
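To make this bottom-up initialization concrete, here is a minimal NumPy sketch; the sigmoid parameterization of f1 and f2 and all weight names (W1, b1, W2, b2) are illustrative assumptions, not the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_variational_params(x, W1, b1, W2, b2):
    """Bottom-up pass of Eq. (12): top-layer Bernoulli means are a
    deterministic function of the first-layer means."""
    mu1 = sigmoid(x @ W1 + b1)      # f1(x; phi1): means of q0(h1 | x)
    mu2 = sigmoid(mu1 @ W2 + b2)    # f2(mu1; phi2): means of q(h2 | x)
    return mu1, mu2

# Toy shapes: a 784-dim input and two layers of 200 units, as in the SBNs above.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(1, 784)).astype(float)
W1, b1 = 0.01 * rng.standard_normal((784, 200)), np.zeros(200)
W2, b2 = 0.01 * rng.standard_normal((200, 200)), np.zeros(200)
mu1, mu2 = init_variational_params(x, W1, b1, W2, b2)
h1 = (rng.random(mu1.shape) < mu1).astype(float)   # sample h1 ~ Bernoulli(mu1)
```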
For DARN, we trained two different configurations on MNIST: one with 500 stochastic units and an additional hyperbolic tangent deterministic layer with 500 units in both the generative and recognition networks, and another with 200 stochastic units with a 500-unit hyperbolic tangent deterministic layer in the generative network only. We used DARN with 200 units on the Caltech-101 silhouettes dataset.
The results of our experiments with the MNIST and Caltech-101 silhouettes datasets trained with AIR, RWS, and RWS refined at test with AIR (RWS+) are in Table 1. Refinement at test (RWS+) always improves the results for RWS. As our unrefined results are comparable to those found in Bornschein and Bengio [1], the improved results indicate that many evaluations of Helmholtz machines in the literature could benefit from refinement with AIR to improve evaluation accuracy. For most model configurations, AIR and RWS perform comparably, though RWS appears to do better in the average test log-probability estimates for some configurations of MNIST. RWS+ performs comparably with variational inference for Monte Carlo objectives [VIMCO, 14], despite the reported VIMCO results relying on more posterior samples in training. Finally, AIR results approach the state of the art on Caltech-101 silhouettes with 3-layer SBNs against the neural autoregressive distribution estimator [NADE, 1].
We also tested our log-probability estimates against the exact log-probability (computed by marginalizing over the joint) of smaller single-layer SBNs with 20 stochastic units. The exact log-probability was −127.474; our estimate with the unrefined approximate posterior was −127.51, and −127.48 with 100 refinement steps. Overall, this result is consistent with those of Table 1: iterative refinement improves the accuracy of log-probability estimates.
5.3 Posterior Improvement
In order to visualize the improvements due to refinement and to demonstrate AIR as a general means of improvement for directed models at test, we generate N samples from the approximate posterior without ($h \sim q_0(h \mid x; \mu)$) and with refinement ($h \sim q_T(h \mid x)$), from a single-layer SBN with 20 stochastic units originally trained with RWS. We then use the samples from the approximate posterior to compute the expected conditional probability, or average reconstruction: $\frac{1}{N}\sum_{n=1}^{N} p(x \mid h^{(n)})$. We used a restricted model with a lower number of stochastic units to demonstrate that refinement also works well with simple models, where the recognition network is more likely to "average" over latent configurations, giving a misleading evaluation of the model's generative capability.
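The average reconstruction can be estimated by straightforward Monte Carlo; the sketch below assumes a sigmoid observation model for p(x | h), which is an illustrative choice rather than the paper's exact parameterization:

```python
import numpy as np

def average_reconstruction(mu_h, W, b, n_samples, rng):
    """(1/N) sum_n p(x | h^(n)) with h^(n) ~ Bernoulli(mu_h); the sigmoid
    observation model p(x_i = 1 | h) = sigmoid(h @ W + b)_i is an assumption."""
    recon = np.zeros_like(b)
    for _ in range(n_samples):
        h = (rng.random(mu_h.shape) < mu_h).astype(float)  # one posterior sample
        recon += 1.0 / (1.0 + np.exp(-(h @ W + b)))        # p(x = 1 | h)
    return recon / n_samples

rng = np.random.default_rng(0)
mu_h = rng.random(20)                      # refined Bernoulli means, q_T(h | x)
W, b = 0.1 * rng.standard_normal((20, 784)), np.zeros(784)
avg = average_reconstruction(mu_h, W, b, n_samples=100, rng=rng)
```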
Figure 3: Top: Average reconstructions, $\frac{1}{N}\sum_{n=1}^{N} p(x \mid h^{(n)})$, for $h^{(n)}$ sampled from the output of the recognition network, $q_0(h \mid x)$ (middle row), against those sampled from the refined posterior, $q_T(h \mid x)$ (bottom row), for T = 20 with a model trained on MNIST. The top row is ground truth. Among the digits whose reconstruction changes the most, many changes correctly reveal the identity of the digit. Bottom: Average reconstructions for a single-layer model with 200 units trained on Caltech-101 silhouettes. Instead of using the posterior from the recognition network, we derived a simpler version, setting 80% of the variational parameters from the recognition network to 0.5, then applied iterative refinement.
We also refine the approximate posterior of a simplified version of the recognition network of a single-layer SBN with 200 units trained with RWS. We simplified the approximate posterior by first computing $\mu_0 = f(x; \phi)$, then randomly setting 80% of the variational parameters to 0.5.
Fig. 3 shows the improvement from refinement for 25 digits from the MNIST test dataset, where the samples chosen were those for which the expected reconstruction error of the original test sample was the most improved. The digits generated from the refined posterior are of higher quality, and in many cases the correct digit class is revealed. This shows that, in many cases where the recognition network indicates that the generative model cannot model a test sample correctly, refinement can more accurately reveal the model's capacity. With the simplified approximate posterior, refinement is
able to retrieve most of the shape of images from the Caltech-101 silhouettes, despite only starting
with 20% of the original parameters from the recognition network. This indicates that the work of
inference need not all be done via a complex recognition network: iterative refinement can be used to
aid in inference with a relatively simple approximate posterior.
6 Conclusion
We have introduced iterative refinement for variational inference (IRVI), a simple, yet effective and flexible approach for training and evaluating directed belief networks that works by improving the approximate posterior from a recognition network. We demonstrated IRVI using adaptive importance refinement (AIR), which uses importance sampling at each iterative step, and showed that AIR can be used to provide low-variance gradients to efficiently train deep directed graphical models. AIR can also be used to more accurately reveal the generative model's capacity, which is evident when the approximate posterior is of poor quality. The improved approximate posterior provided by AIR shows an increased effective sample size, which is a consequence of a better approximation of the true posterior and improves the accuracy of the test log-probability estimates.
7 Acknowledgements
This work was supported by Microsoft Research to RDH under NJ; NIH P20GM103472, R01 grant
REB020407, and NSF grant 1539067 to VDC; and ONR grant N000141512791 and ADeLAIDE
grant FA8750-16C-0130-001 to RS. KC was supported in part by Facebook, Google (Google Faculty
Award 2016) and NVidia (GPU Center of Excellence 2015-2016), and RDH was supported in part by
PIBBS.
References
[1] Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. arXiv preprint arXiv:1406.2751, 2014.
[2] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
[3] Peter Dayan, Geoffrey E. Hinton, Radford M. Neal, and Richard S. Zemel. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.
[4] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 1977.
[5] Arnaud Doucet, Nando de Freitas, and Neil Gordon. An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice, pages 3-14. Springer, 2001.
[6] Neil J. Gordon, David J. Salmond, and Adrian F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In Radar and Signal Processing, IEE Proceedings F, volume 140, pages 107-113. IET, 1993.
[7] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
[8] Shixiang Gu, Zoubin Ghahramani, and Richard E. Turner. Neural adaptive sequential Monte Carlo. In Advances in Neural Information Processing Systems, pages 2611-2619, 2015.
[9] Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176, 2015.
[10] Geoffrey Hinton. Neural networks for machine learning. Coursera, video lectures, 2012.
[11] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[12] Diederik Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1791-1799, 2014.
[14] Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
[15] Radford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1), 1992.
[16] Radford M. Neal and Geoffrey E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355-368. Springer, 1998.
[17] Man-Suk Oh and James O. Berger. Adaptive importance sampling in Monte Carlo integration. Journal of Statistical Computation and Simulation, 41(3-4):143-168, 1992.
[18] Brooks Paige and Frank Wood. Inference networks for sequential Monte Carlo in graphical models. arXiv preprint arXiv:1602.06701, 2016.
[19] Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.
[20] Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1278-1286, 2014.
[21] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[22] Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 693-700, 2010.
[23] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872-879. ACM, 2008.
[24] Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In David Blei and Francis Bach, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1218-1226. JMLR Workshop and Conference Proceedings, 2015. URL http://jmlr.org/proceedings/papers/v37/salimans15.pdf.
[25] Lawrence K. Saul, Tommi Jaakkola, and Michael I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4(1):61-76, 1996.
[26] Yichuan Tang and Ruslan R. Salakhutdinov. Learning stochastic feedforward neural networks. In Advances in Neural Information Processing Systems, pages 530-538, 2013.
Mapping Estimation for Discrete Optimal Transport
Michaël Perrot
Univ Lyon, UJM-Saint-Etienne, CNRS,
Lab. Hubert Curien UMR 5516, F-42023
[email protected]
Rémi Flamary
Université Côte d'Azur,
Lagrange, UMR 7293, CNRS, OCA
[email protected]
Nicolas Courty
Université de Bretagne Sud,
IRISA, UMR 6074, CNRS,
[email protected]
Amaury Habrard
Univ Lyon, UJM-Saint-Etienne, CNRS,
Lab. Hubert Curien UMR 5516, F-42023
[email protected]
Abstract
We are interested in the computation of the transport map of an Optimal Transport
problem. Most of the computational approaches of Optimal Transport use the
Kantorovich relaxation of the problem to learn a probabilistic coupling γ but do
not address the problem of learning the underlying transport map T linked to
the original Monge problem. Consequently, it lowers the potential usage of such
methods in contexts where out-of-samples computations are mandatory. In this
paper we propose a new way to jointly learn the coupling and an approximation of
the transport map. We use a jointly convex formulation which can be efficiently
optimized. Additionally, jointly learning the coupling and the transport map allows
to smooth the result of the Optimal Transport and generalize it to out-of-samples
examples. Empirically, we show the interest and the relevance of our method in
two tasks: domain adaptation and image editing.
1 Introduction
In recent years Optimal Transport (OT) [1] has received a lot of attention in the machine learning
community [2, 3, 4, 5]. This gain of interest comes from several nice properties of OT when used
as a divergence to compare discrete distributions: (i) it provides a sound and theoretically grounded
way of comparing multivariate probability distributions without the need for estimating parametric
versions and (ii) by considering the geometry of the underlying space through a cost metric, it can
encode useful information about the nature of the problem.
OT is usually expressed as an optimal cost functional but it also enjoys a dual variational formulation [1, Chapter 5]. It has been proven useful in several settings. As a first example it corresponds to
the Wasserstein distance in the space of probability distributions. Using this distance it is possible to
compute means and barycentres [6, 7] or to perform a PCA in the space of probability measures [8].
This distance has also been used in subspace identification problems for analysing the differences
between distributions [9], in graph based semi-supervised learning to propagate histogram labels
across nodes [4] or as a way to define a loss function for multi-label learning [5]. As a second example
OT enjoys a variety of bounds for the convergence rate of empirical to population measures which can
be used to derive new probabilistic bounds for the performance of unsupervised learning algorithms
such as k-means [2]. As a last example, OT is a means of interpolation between distributions [10] that has been used in Bayesian inference [11], color transfer [12] or domain adaptation [13].
On the computational side, despite some results with finite difference schemes [14], one of the major gains is the recent development of regularized versions, which leads to efficient algorithms [3, 7, 15]. Most
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
OT formulations are based on the computation of a (probabilistic) coupling matrix that can be seen
as a bi-partite graph between the bins of the distributions. This coupling, also called transportation
matrix, corresponds to an empirical transport map which suffers from some drawbacks: it can only be
applied to the examples used to learn it. In other words when a new dataset (or sample) is available,
one has to recompute an OT problem to deal with the new instances which can be prohibitive for some
applications in particular when the task is similar or related. From a machine learning standpoint, this
also means that we do not know how to find a good approximation of a transport map computed from
a small sample that can be generalized to unseen data. This is particularly critical when one considers
medium or large scale applications such as image editing problems. In this paper, we propose to
bridge this gap by learning an explicit transformation that can be interpreted as a good approximation
of the transport map. As far as we know, this is the first approach that addresses directly this problem
of out-of-sample mapping.
Our formulation is based on classic regularized regression and admits two appealing interpretations.
On the one hand, it can be seen as learning a transformation regularized by a transport map. On the
other hand, we can see it as the computation of the transport map regularized w.r.t. the definition
of a transformation (e.g. linear, non-linear, . . . ). This results in an optimization problem that
jointly learns both the transport map and the transformation. This formulation can be efficiently
solved thanks to alternating block-coordinate descent and actually benefits the two models: (i) we
obtain smoother transport maps that must be compliant with a transformation that can be used on
out-of-sample examples and (ii) the transformation is able to take into account some geometrical
information captured by OT. See Figure 1 for an illustration. We provide some empirical evidence for
the usefulness of our approach in domain adaptation and image editing. Beyond that, we think that
this paper can open the door to new research on the generalization ability of OT.
The rest of the paper is organized as follows. Section 2 introduces some notations and preliminaries
in optimal transport. We present our approach in Section 3. Our experimental evaluation is given in
Section 4 and we conclude in Section 5.
2 Background
Monge problem  Let $\Omega_S \subset \mathbb{R}^{d_s}$ and $\Omega_T \subset \mathbb{R}^{d_t}$ be two separable metric spaces such that any probability measure on $\Omega_S$, respectively $\Omega_T$, is a Radon measure. By considering a cost function $c : \Omega_S \times \Omega_T \to [0, \infty[$, Monge's formulation of the OT problem is to find a transport map $T : \Omega_S \to \Omega_T$ (also known as a push-forward operator) between two probability measures $\mu_S$ on $\Omega_S$ and $\mu_T$ on $\Omega_T$ realizing the infimum of the following function:
$\inf_T \int_{\Omega_S} c(x, T(x))\, d\mu_S(x), \quad T\#\mu_S = \mu_T.$  (1)
When reaching this infimum, the corresponding map T is an optimal transport map. It associates one point from $\Omega_S$ to a single point in $\Omega_T$. Therefore, the existence of this map is not always guaranteed, as when, for example, $\mu_S$ is a Dirac and $\mu_T$ is not. As such, the existence of solutions for this problem can in general not be established when $\mu_S$ and $\mu_T$ are supported on a different number of Diracs. Yet, in a machine learning context, data samples usually form discrete distributions, but can be seen as observations of a regular, continuous (with respect to the Lebesgue measure) underlying distribution, thus fulfilling existence conditions (see [1, Chapter 9]). As such, assuming the existence of T calls for a relaxation of the previous problem.
Kantorovich relaxation  The Kantorovich formulation of OT [16] is a convex relaxation of the Monge problem. Let us define $\Pi$ as the set of all probabilistic couplings in $\mathcal{P}(\Omega_S \times \Omega_T)$, the space of all joint distributions with marginals $\mu_S$ and $\mu_T$. The Kantorovich problem seeks a general coupling $\gamma \in \Pi$ between $\Omega_S$ and $\Omega_T$:
$\gamma_0 = \arg\min_{\gamma \in \Pi} \int_{\Omega_S \times \Omega_T} c(x^s, x^t)\, d\gamma(x^s, x^t).$  (2)
The optimal coupling always exists [1, Theorem 4.1]. This leads to a simple formulation of the OT problem in the discrete case, i.e. whenever $\mu_S$ and $\mu_T$ are only accessible through discrete samples $X_s = \{x_i^s\}_{i=1}^{n_s}$ and $X_t = \{x_i^t\}_{i=1}^{n_t}$. The corresponding empirical distributions can be written as $\hat{\mu}_S = \sum_{i=1}^{n_s} p_i^s \delta_{x_i^s}$ and $\hat{\mu}_T = \sum_{i=1}^{n_t} p_i^t \delta_{x_i^t}$, where $\delta_x$ is the Dirac function at location
Figure 1: Illustration of the mappings estimated on the clown dataset with a linear (top) and nonlinear
(bottom) mapping (best viewed in color).
$x \in \Omega$. Here $p_i^s$ and $p_i^t$ are probability masses associated to the i-th sample and belong to the probability simplex, i.e. $\sum_{i=1}^{n_s} p_i^s = \sum_{i=1}^{n_t} p_i^t = 1$. Let $\hat{\Pi}$ be the set of probabilistic couplings between the two empirical distributions, defined as $\hat{\Pi} = \{\gamma \in (\mathbb{R}^+)^{n_s \times n_t} \mid \gamma \mathbf{1}_{n_t} = \hat{\mu}_S,\ \gamma^T \mathbf{1}_{n_s} = \hat{\mu}_T\}$, where $\mathbf{1}_n$ is an n-dimensional vector of ones. Problem (2) becomes:
$\gamma_0 = \arg\min_{\gamma \in \hat{\Pi}} \langle \gamma, C \rangle_F,$  (3)
where $\langle \cdot, \cdot \rangle_F$ is the Frobenius dot product¹ and $C \geq 0$ is the cost matrix related to the cost function c.
¹ $\langle A, B \rangle_F = \mathrm{Tr}(A^T B)$
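For reference, problem (3) is a standard linear program and can be solved directly; the sketch below uses scipy.optimize.linprog, an illustrative choice (dedicated OT solvers are far faster in practice):

```python
import numpy as np
from scipy.optimize import linprog

def solve_discrete_ot(a, b, C):
    """Solve problem (3): min_gamma <gamma, C>_F over couplings gamma >= 0
    with row marginals a (source) and column marginals b (target)."""
    ns, nt = C.shape
    A_eq = np.zeros((ns + nt, ns * nt))
    for i in range(ns):                 # row-sum constraints: gamma @ 1 = a
        A_eq[i, i * nt:(i + 1) * nt] = 1.0
    for j in range(nt):                 # column-sum constraints: gamma.T @ 1 = b
        A_eq[ns + j, j::nt] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(ns, nt)

# Two tiny uniform empirical distributions with squared Euclidean cost.
rng = np.random.default_rng(0)
Xs, Xt = rng.standard_normal((5, 2)), rng.standard_normal((4, 2)) + 3.0
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
gamma0 = solve_discrete_ot(np.full(5, 1 / 5), np.full(4, 1 / 4), C)
```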
Barycentric mapping  Once the probabilistic coupling $\gamma_0$ has been computed, one needs to map the examples from $\Omega_S$ to $\Omega_T$. This mapping can be conveniently expressed with respect to the set of examples $X_t$ as the following barycentric mapping [11, 13, 12]:
$\hat{x}_i^s = \arg\min_{x \in \Omega_T} \sum_{j=1}^{n_t} \gamma_0(i, j)\, c(x, x_j^t),$  (4)
where $x_i^s$ is a given source sample and $\hat{x}_i^s$ is its corresponding image. When the cost function is the squared $\ell_2$ distance, i.e. $c(x, x') = \|x - x'\|_2^2$, this barycentre corresponds to a weighted average and the sample is mapped into the convex hull of the target examples. For all source samples, this barycentric mapping can therefore be expressed as:
$\hat{X}_s = B_{\gamma_0}(X_s) = \mathrm{diag}(\gamma_0 \mathbf{1}_{n_t})^{-1} \gamma_0 X_t.$  (5)
In the rest of the paper we will focus on a uniform sampling, i.e. the examples are drawn i.i.d. from $\mu_S$ and $\mu_T$, whence $\hat{X}_s = n_s \gamma_0 X_t$. The main drawback of the mapping (5) is that it does not allow the projection of out-of-sample examples which have not been seen during the learning process of $\gamma_0$. It means that to transport a new example $x^s \in \Omega_S$ one has to compute the coupling matrix $\gamma_0$ again using this new example. Also, while some authors consider specific regularizations of $\gamma$ [3, 13] to control the nature of the coupling, inducing specific properties of the transformation T (i.e. regularity, divergence free, etc.) is hard to achieve.
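In code, the barycentric mapping (5) is a one-liner once γ₀ is available; the helper below is a minimal sketch:

```python
import numpy as np

def barycentric_mapping(gamma, Xt):
    """Eq. (5): each source point is sent to the gamma-weighted average of
    the target points; rows of gamma are normalized by their total mass."""
    row_mass = gamma.sum(axis=1)        # equals the source weights p^s
    return (gamma @ Xt) / row_mass[:, None]

# With uniform source weights (p_i^s = 1/n_s) this reduces to n_s * gamma @ Xt,
# the expression used in the text above.
```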
In the next section we present a relaxation of the OT problem, which consists in jointly learning $\gamma$ and T. We derive the corresponding optimization problem, and show its usefulness in specific scenarios.
3 Contributions
3.1 Joint learning of T and γ
In this paper we propose to solve the problem of optimal transport by jointly learning the matrix $\gamma$ and the transformation function T. First of all, we denote by $\mathcal{H}$ the space of transformations from $\Omega_S$ to $\Omega_T$ and, using a slight abuse of notation, $X_s$ and $X_t$ are matrices where each line is an example respectively drawn from $\mu_S$ and $\mu_T$. We propose the following optimisation problem:
$\arg\min_{T \in \mathcal{H},\, \gamma \in \hat{\Pi}} f(\gamma, T) = \frac{1}{n_s d_t}\|T(X_s) - n_s \gamma X_t\|_F^2 + \frac{\lambda_\gamma}{\max(C)}\langle\gamma, C\rangle_F + \frac{\lambda_T}{d_s d_t} R(T)$  (6)
where $T(X_s)$ is a short-hand for the application of T on each example in $X_s$, $R(\cdot)$ is a regularization term on T, and $\lambda_\gamma, \lambda_T$ are hyper-parameters controlling the trade-off between the three terms in the optimization problem. The first term in (6) depends on both T and $\gamma$ and controls the closeness between the transformation induced by T and the barycentric interpolation obtained from $\gamma$. The second term only depends on $\gamma$ and corresponds to the standard optimal transport loss. The third term regularizes T to ensure a better generalization.
A standard approach to solve problem (6) is to use block-coordinate descent (BCD) [17], where the idea is to alternately optimize for T and $\gamma$. In the next theorem we show that under some mild assumptions on the regularization term $R(\cdot)$ and the function space $\mathcal{H}$ this problem is jointly convex. Note that in this case we are guaranteed to converge to the optimal solution only if we are strictly convex w.r.t. T and $\gamma$. While this is not the case for $\gamma$, the algorithm works well in practice and a small regularization term can be added if theoretical convergence is required. The proof of the following theorem can be found in the supplementary.
Theorem 1. Let $\mathcal{H}$ be a convex space and $R(\cdot)$ be a convex function. Problem (6) is jointly convex in T and $\gamma$.
As discussed above we propose to solve optimization problem (6) using a block coordinate descent approach. As such we need an efficient way to solve: (i) for $\gamma$ when T is fixed and (ii) for T when $\gamma$ is fixed. To solve the problem w.r.t. $\gamma$ with a fixed T, a common approach is to use the Frank-Wolfe algorithm [12, 18]. It is a procedure for solving any convex constrained optimization problem with a convex and continuously differentiable objective function over a compact convex subset of any vector space. This algorithm can find an approximation of the optimal solution in $O(1/\epsilon)$ iterations [19]. A detailed algorithm is given in the supplementary material. In the next section we discuss the solution of the minimization w.r.t. T with fixed $\gamma$ for different functional spaces.
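A minimal sketch of this Frank-Wolfe step for the γ-subproblem is given below; it assumes the POT library (ot.emd) for the linear minimization oracle and uses the standard 2/(k+2) step size instead of an exact line search, both illustrative simplifications of the detailed algorithm in the supplementary:

```python
import numpy as np
import ot  # POT library ("pip install pot"); its availability is an assumption

def frank_wolfe_gamma(TXs, Xt, C, a, b, lam_g, n_iter=50):
    """Minimize problem (6) in gamma for a fixed map T (here TXs = T(X_s)).
    Each linear minimization oracle over the coupling polytope is itself an
    unregularized OT problem, solved with ot.emd on the gradient as cost."""
    ns, dt = TXs.shape
    gamma = np.outer(a, b)                       # feasible start: independent coupling
    for k in range(n_iter):
        grad = (-2.0 / dt) * (TXs - ns * gamma @ Xt) @ Xt.T \
               + (lam_g / C.max()) * C           # gradient of the objective in gamma
        s = ot.emd(a, b, grad)                   # LMO: a vertex of the polytope
        gamma += 2.0 / (k + 2.0) * (s - gamma)   # fixed FW step size (no line search)
    return gamma
```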
3.2 Choosing H
In the previous subsection we presented our method when considering a general set of transformations $\mathcal{H}$. In this section we propose several possibilities for the choice of a convex set $\mathcal{H}$. On the one hand, we propose to define $\mathcal{H}$ as a set of linear transformations from $\Omega_S$ to $\Omega_T$. On the other hand, using the kernel trick, we propose to consider non-linear transformations. A summary of the approach can be found in Algorithm 1.
Linear transformations  A first way to define $\mathcal{H}$ is to consider linear transformations induced by a $d_s \times d_t$ real matrix L:
$\mathcal{H} = \{T : \exists L \in \mathbb{R}^{d_s \times d_t},\ \forall x^s \in \Omega_S,\ T(x^s) = x^{sT} L\}.$  (7)
Furthermore, we define $R(T) = \|L - I\|_F^2$, where I is the identity matrix. We choose to bias L toward I in order to ensure that the examples are not moved too far away from their initial position. In this case we can rewrite optimization problem (6) as:
$\arg\min_{L \in \mathbb{R}^{d_s \times d_t},\, \gamma \in \hat{\Pi}} \frac{1}{n_s d_t}\|X_s L - n_s \gamma X_t\|_F^2 + \frac{\lambda_\gamma}{\max(C)}\langle\gamma, C\rangle_F + \frac{\lambda_T}{d_s d_t}\|L - I\|_F^2.$  (8)
is fixed. One solution is to use the following closed form for L:
?1
1
?T
1
?T
T
T
L=
X Xs +
I
X ns ?Xt +
I
(9)
ns dt s
ds dt
ns dt s
ds dt
where (?)?1 is the matrix inverse (Moore-Penrose pseudo-inverse when the matrix is singular). In the
previous definition of H, we considered non biased linear transformations. However it is sometimes
desirable to add a bias to the transformation. The equations being very similar in spirit to the non
biased case we refer the interested reader to the supplementary material.
4
Algorithm 1: Joint Learning of L and ?.
input : Xs , Xt source and target examples and ?? , ?T hyper parameters.
output : L, ?.
begin
? and L0 = I
Initialize k = 0, ? 0 ? ?
repeat
Learn ? k+1 solving problem (6) with fixed Lk using a Frank-Wolfe approach.
Learn Lk+1 using Equation (9), (12) or their biased counterparts with fixed ? k+1 .
Set k = k + 1.
until convergence
1
2
3
4
5
6
7
Non-linear transformations In some cases a linear transformation is not sufficient to approximate
the transport map. Hence, we propose to consider non-linear transformations. Let
? be a non-linear
function associated to a kernel function k : ?S ? ?S ? R such that k(xs , xs 0 ) = ?(xs ), ?(xs 0 ) H ,
we can define H for a given set of examples Xs as:
n
o
s
t
H = T : ? L ? Rn ?d ?xs ? ?S , T (xs ) = kXs (xs T )L
(10)
s
s
s
s
s
s
sT
where kXs (x ) is a short-hand for the vector k(x , x1 ) k(x , x2 ) ? ? ? k(x , xns ) where
xs1 , ? ? ? , xsns ? Xs . In this case optimization problem (6) becomes:
?T
1
??
2
2
arg min
h?, CiF +
kkXs (Xs )L ? ns ?Xt kF +
kkXs (?)LkF . (11)
s ?dt
n
d
max(C)
n
d
n
s
t
s
t
?
L?R
,???
where kXs (?) is a short-hand for the vector k(?, xs1 ) ? ? ? k(?, xsns ) = ?(xs1 ) ? ? ? ?(xsns ) .
As in the linear case there is a closed form solution for L when ? is fixed:
?1
1
1
?T
L=
kX (Xs ) + 2 I
ns ?Xt .
(12)
ns dt s
d
ns dt
As in the linear case it might be interesting to use a bias (Presented in the supplementary material).
3.3
Discussion on the quality of the transport map approximation
In this section we propose to discuss some theoretical considerations about our framework and more
precisely on the quality of the learned transformation T . To assess this quality we consider the
Frobenius norm between T and the true transport map, denoted T ? , that we would obtain if we could
solve Monge?s problem. Let B?? be the empirical barycentric mapping of Xs using the probabilistic
coupling ?? learned between Xs and Xt . Similarly let B?0 be the theoretical barycentric mapping
associated with the probabilistic coupling ?0 learned on ?S , ?T the whole distributions and which
corresponds to the solution of Kantorovich?s problem. Using a slight abuse of notations we denote by
B?? (xs ) and B?0 (xs ) the projection of xs ? Xs by these barycentric mappings. Using the triangle
inequality, some standard properties on the square function, the definition of H and [20, Theorem 2],
we have with high probability that (See the supplementary material for a justification):
$\mathbb{E}_{x^s \sim \mu_S} \|T(x^s) - T^*(x^s)\|_F^2 \le 4 \sum_{x^s \in X_s} \|T(x^s) - B_{\hat{\gamma}}(x^s)\|_F^2 + O\!\left(\tfrac{1}{\sqrt{n_s}}\right) + 4 \sum_{x^s \in X_s} \|B_{\hat{\gamma}}(x^s) - B_{\gamma_0}(x^s)\|_F^2 + 2\, \mathbb{E}_{x^s \sim \mu_S} \|B_{\gamma_0}(x^s) - T^*(x^s)\|_F^2.$  (13)
From Inequality (13) we assess the quality of the learned transformation T w.r.t. three key quantities. The first quantity, $\sum_{x^s \in X_s} \|T(x^s) - B_{\hat{\gamma}}(x^s)\|_F^2$, is a measure of the difference between the learned transformation and the empirical barycentric mapping. We minimize it in Problem (6). The second and third quantities are theoretical and hard to bound because, as far as we know, there is a lack of theoretical results related to these terms in the literature. Nevertheless, we expect $\sum_{x^s \in X_s} \|B_{\hat{\gamma}}(x^s) - B_{\gamma_0}(x^s)\|_F^2$ to decrease uniformly with respect to the number of examples, as it corresponds to a measure of how well the empirical barycentric mapping estimates the theoretical one. Similarly, we expect $\mathbb{E}_{x^s \sim \mu_S} \|B_{\gamma_0}(x^s) - T^*(x^s)\|_F^2$ to be small, as it characterizes how well the theoretical barycentric mapping approximates the true transport map. This depends of course on the expressiveness of the set $\mathcal{H}$ considered. We think that this discussion opens up new theoretical perspectives for OT in Machine Learning, but these are beyond the scope of this paper.
Table 1: Accuracy on the Moons dataset. Color-code: the darker the result, the better.

Angle  1NN    GFK    SA     OT     L1L2   OTE    | OTLin        | OTLinB       | OTKer        | OTKerB
                                                 | T      γ     | T      γ     | T      γ     | T      γ
10     100.0  99.9   100.0  97.9   99.6   100.0  | 100.0  100.0 | 100.0  100.0 | 100.0  100.0 | 100.0  100.0
20     93.1   95.8   93.1   95.0   98.7   100.0  | 100.0  100.0 | 100.0  100.0 | 100.0  100.0 | 100.0  100.0
30     84.0   92.5   84.0   90.6   98.4   100.0  | 99.8   99.9  | 99.8   99.9  | 100.0  100.0 | 100.0  100.0
40     77.1   90.8   74.4   83.7   95.8   100.0  | 98.3   98.7  | 98.1   98.5  | 99.7   99.7  | 99.6   99.7
50     61.7   90.2   73.1   77.8   87.7   87.3   | 97.8   97.6  | 97.5   97.5  | 99.1   99.2  | 99.1   99.1
60     41.2   79.4   72.3   71.0   88.3   86.3   | 96.4   97.2  | 95.8   97.0  | 96.6   96.8  | 96.6   96.8
70     23.1   61.0   72.3   64.5   89.0   77.5   | 88.0   94.7  | 88.2   94.3  | 80.8   81.5  | 82.5   83.1
80     20.7   36.2   72.3   57.3   73.6   58.8   | 76.9   81.0  | 76.6   80.7  | 74.0   74.1  | 73.9   74.2
90     19.4   43.1   34.2   51.0   58.1   51.3   | 67.9   68.0  | 67.1   68.1  | 56.3   55.8  | 57.6   55.4

4 Experiments
4.1 Domain Adaptation
Datasets  We consider two domain adaptation (DA) datasets, namely Moons [21] and Office-Caltech [22]. The Moons dataset is a binary classification task where the source domain corresponds to two intertwined moons, each one representing a class. The target domain is built by rotating the source domain with an angle ranging from 10 to 90 degrees. It leads to 9 different adaptation tasks of increasing difficulty. The examples are two dimensional and we consider 300 source and target examples for training and 1000 target examples for testing. The Office-Caltech dataset is a 10-class image classification task with 4 domains corresponding to images coming from different sources: amazon (A), dslr (D), webcam (W) and Caltech10 (C). There are 12 adaptation tasks where each domain is in turn considered as the source or the target (denoted source → target). To represent the images we use the deep learning features of size 4096 named decaf6 [23]. During the training process we consider all the examples from the source domain and half of the examples from the target domain, the other half being used as the test set.
Methods  We consider 6 baselines. The first one is a simple 1-Nearest-Neighbour (1NN) using the original source examples only. The second and third ones are two widely used DA approaches, namely Geodesic Flow Kernel (GFK) [22] and Subspace Alignment (SA) [24]. The fourth to sixth baselines are OT based approaches: the classic OT method (OT), OT with entropy based regularization (OTE) [3] and OT with $\ell_1\ell_2$ regularization (L1L2) [13]. We present the results of our approach with the linear (OTLin) and kernel (OTKer) versions of T and their biased counterparts (*B). For OT based methods the idea is to (i) compute the transport map between the source and the target, (ii) project the source examples and (iii) classify the target examples using a 1NN on the projected source.
Experimental Setup  We consider the following experimental setup for all the methods and datasets. All the results presented in this section are averaged over 10 trials. For each trial we consider three sets of examples: a labelled source training set denoted $X_s, y_s$, an unlabelled target training set denoted $X_t^{train}$ and a labelled target testing set $X_t^{test}$. The model is learned on $X_s, y_s$ and $X_t^{train}$ and evaluated on $X_t^{test}$ with a 1NN learned on $X_s, y_s$. All the hyper-parameters are tuned according to a grid search on the source and target training instances using a circular validation procedure derived from [21, 25] and described in the supplementary material. For GFK and SA we choose the dimension of the subspace $d \in \{3, 6, \ldots, 30\}$; for L1L2 and OTE we set the parameter for entropy regularization in $\{10^{-6}, 10^{-5}, \ldots, 10^{5}\}$; for L1L2 we choose the class related parameter $\eta \in \{10^{-5}, 10^{-4}, \ldots, 10^{2}\}$; and for all our methods we choose $\lambda_T, \lambda_\gamma \in \{10^{-3}, 10^{-2}, \ldots, 10^{0}\}$.
The results on the Moons and Office-Caltech datasets are respectively given in Tables 1 and 2. A first important remark is that the coupling $\gamma$ and the transformation T almost always obtain the same results. It shows that our method is able to learn a good approximation T of the transport map induced by $\gamma$. In terms of accuracy our approach tends to give the best results. It shows that we are effectively able to move the distributions closer in a relevant way. For the Moons dataset, the last 6 approaches (including ours) based on OT obtain similar results until 40 degrees, while the other methods fail to obtain good results at 20 degrees. Beyond 50 degrees, our approaches give significantly better results than the others. Furthermore they are more stable when the difficulty of the problem increases, which
Table 2: Accuracy on the Office-Caltech dataset. Color-code: the darker the result, the better.

Task   1NN    GFK    SA     OT     L1L2   OTE    | OTLin        | OTLinB       | OTKer        | OTKerB
                                                 | T      γ     | T      γ     | T      γ     | T      γ
D→W    89.5   93.3   95.6   77.0   95.7   95.7   | 97.3   97.3  | 97.3   97.3  | 98.4   98.5  | 98.5   98.5
D→A    62.5   77.2   88.5   70.8   74.9   74.8   | 85.7   85.7  | 85.8   85.8  | 89.9   89.9  | 89.5   89.5
D→C    51.8   69.7   79.0   68.1   67.8   68.0   | 77.2   77.2  | 77.4   77.4  | 69.1   69.2  | 69.3   69.3
W→D    99.2   99.8   99.6   74.1   94.4   94.4   | 99.4   99.4  | 99.8   99.8  | 97.2   97.2  | 96.9   96.9
W→A    62.5   72.4   79.2   67.6   71.3   71.3   | 81.5   81.5  | 81.4   81.4  | 78.5   78.3  | 78.5   78.8
W→C    59.5   63.7   55.0   63.1   67.8   67.8   | 75.9   75.9  | 75.4   75.4  | 72.7   72.7  | 65.1   63.3
A→D    65.2   75.9   83.8   64.6   70.1   70.5   | 80.6   80.6  | 80.4   80.5  | 65.6   65.5  | 71.9   71.5
A→W    56.8   68.0   74.6   66.8   67.2   67.3   | 74.6   74.6  | 74.4   74.4  | 66.4   64.8  | 70.0   68.9
A→C    70.1   75.7   79.2   70.4   74.1   74.3   | 81.8   81.8  | 81.6   81.6  | 84.4   84.4  | 84.5   84.5
C→D    75.9   79.5   85.0   66.0   69.8   70.2   | 87.1   87.1  | 87.2   87.2  | 70.1   70.0  | 78.6   78.6
C→W    65.2   70.7   74.4   59.2   63.8   63.8   | 78.3   78.3  | 78.5   78.5  | 80.0   80.4  | 73.5   73.4
C→A    85.8   87.1   89.3   75.2   76.6   76.7   | 89.9   89.9  | 89.7   89.7  | 82.4   82.2  | 83.6   83.5
Mean   70.3   77.8   81.9   68.6   74.5   74.6   | 84.1   84.1  | 84.1   84.1  | 79.6   79.4  | 80.0   79.7
can be interpreted as a benefit from our regularization. In the supplementary material we propose an illustration of the transformation learned by our approach. For Office-Caltech, our methods are significantly better than the other approaches, which illustrates the potential of our method for difficult tasks. To conclude, forcing OT to simultaneously learn the coupling and the transformation seems beneficial.
4.2 Seamless copy in images with gradient adaptation
We propose here a direct application of our mapping estimation in the context of image editing. While several papers using OT focus on color adaptation [12, 26], we explore here a new variant in the domain of image editing: the seamless editing or cloning in images. In this context, one may desire to import a region from a given source image to a target image. As a direct copy of the region leads to inaccurate results in the final image near the boundaries of the copied selection, a very popular method, proposed by Pérez and co-workers [27], allows to seamlessly blend the target image and the selection. This technique, coined as Poisson Image Editing, operates in the gradient domain of the image. Hence, the gradients of the selection operate as a guidance field for an image reconstruction based on membrane interpolation with appropriate boundary conditions extracted from the target image (see the supplementary material for more details).
Though appealing, this technique is prone to errors due to local contrast changes or false colors resulting from the integration. While some solutions combining both gradient and color domains exist [28], this editing technique usually requires the source and target images to have similar colors and contrast. Here, we propose to enhance the genericity of this technique by forcing the gradient distribution of the source image to follow the gradient distribution of the target image. As a result, the seamless cloning not only blends the copied region smoothly into the target domain, but also constrains the color dynamics to those of the target image. Hence, a part of the style of the target image is preserved. We start by learning a transfer function $T_{s \to t} : \mathbb{R}^6 \to \mathbb{R}^6$ with our method, where 6 denotes the vertical and horizontal components of the gradient per color, and we then directly solve the same system as [27].
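Building the 6-dimensional gradient features is straightforward; the sketch below uses NumPy's gradient operator, and the exact feature ordering is an illustrative assumption:

```python
import numpy as np

def gradient_features(img):
    """Stack per-channel vertical/horizontal gradients of an H x W x 3 image
    into one row of 6-dimensional features per pixel."""
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    # One row per pixel: (dy_R, dy_G, dy_B, dx_R, dx_G, dx_B).
    return np.concatenate([gy.reshape(-1, 3), gx.reshape(-1, 3)], axis=1)

# A random subsample of rows (e.g. 500, as in the experiment below) from the
# source and target feature matrices can be fed to the mapping estimation above.
```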
When dealing with images, the number of source and target gradients largely exceeds tens of thousands, and it is mandatory to consider methods that scale appropriately. As such, our technique can readily learn the transfer function $T_{s \to t}$ over a limited set of gradients and generalizes appropriately to unseen gradients. Three illustrations of this method are proposed in a context of face swapping in Figure 2. As one can observe, the original method of Poisson image editing [27] (3rd column) tends to preserve the color dynamics of the original image and fails in copying the style of the target image. Our method was tested with a linear and a kernel version of $T_{s \to t}$, which was learned with only 500 gradients sampled randomly from both sources ($\lambda_T = 10^{-2}$ and $\lambda_T = 10^{3}$ for the linear and kernel versions respectively, and $\lambda_\gamma = 10^{-7}$ in both cases). As a general qualitative comment, one can observe that the kernel version of $T_{s \to t}$ is better at preserving the dynamics of the gradient, while the linear version tends to flatten the colors. In this low-dimensional space, this illustrates the need for a non-linear transformation. Regarding the computational time, the gradient adaptation is of the same
Figure 2: Illustrations of seamless copies with gradient adaptation. Each row is composed of the source image, the corresponding selection zone $\Omega$ described as a binary mask, and the target image. We compare here the linear (4th column) and kernel (5th column) versions of the map $T_{s \to t}$ with the original method of [27] (2nd column) (best viewed in color).
order of magnitude as the Poisson equation solving, and each example is computed in less than 30s
on a standard personal laptop. In the supplementary material we give other examples of the method.
5 Conclusion
In this paper we proposed a jointly convex approach to learn both the coupling $\gamma$ and a transformation T approximating the transport map given by $\gamma$. It allows us to apply a learned transport to a set of out-of-sample examples not seen during the learning process. Furthermore, jointly learning the coupling and the transformation allows us to regularize the transport by enforcing a certain smoothness on the transport map. We also proposed several possibilities for choosing $\mathcal{H}$, the set of possible transformations. We presented some theoretical considerations on the generalization ability of the learned transformation T. Hence we discussed that, under the assumption that the barycentric mapping generalizes well and is a good estimate of the true transformation, T learned with our method should be a good approximation of the true transformation. We have shown that our approach is efficient in practice on two different tasks: domain adaptation and image editing.
The framework presented in this paper opens the door to several perspectives. First, from a theoretical standpoint the bound proposed raises some questions on the generalization ability of the barycentric mapping and on the estimation of the quality of the true barycentric mapping with respect to the target transformation. On a more practical side, note that in recent years regularized OT has encountered a growing interest and several methods have been proposed to control the behaviour of the transport. As long as these regularization terms are convex, one could imagine using them in our framework. Another perspective could be to use our framework in a mini-batch setting where, instead of learning from the whole dataset, we can estimate a single function T from several couplings $\gamma$ optimized on different splits of the examples. As a last example, we believe that our framework could allow the use of the notion of OT in deep architectures since, contrary to the coupling $\gamma$, the function T can be used on out-of-sample examples.
Acknowledgments
This work was supported in part by the French ANR project LIVES ANR-15-CE23-0026-03.
References
[1] C. Villani. Optimal transport: old and new. Grund. der mathematischen Wissenschaften. Springer, 2009.
[2] G. Canas and L. Rosasco. Learning probability measures with respect to optimal transport metrics. In NIPS, 2012.
[3] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, 2013.
[4] J. Solomon, R. Rustamov, G. Leonidas, and A. Butscher. Wasserstein propagation for semi-supervised learning. In ICML, 2014.
[5] C. Frogner, C. Zhang, H. Mobahi, M. Araya, and T. Poggio. Learning with a Wasserstein loss. In NIPS, 2015.
[6] M. Cuturi and A. Doucet. Fast computation of Wasserstein barycenters. In ICML, 2014.
[7] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman projections for regularized transportation problems. SISC, 2015.
[8] V. Seguy and M. Cuturi. Principal geodesic analysis for probability measures under the optimal transport metric. In NIPS, 2015.
[9] J. Mueller and T. Jaakkola. Principal differences analysis: Interpretable characterization of differences between distributions. In NIPS, 2015.
[10] R. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1), 1997.
[11] S. Reich. A nonparametric ensemble transform method for Bayesian inference. SISC, 2013.
[12] S. Ferradans, N. Papadakis, G. Peyré, and J.-F. Aujol. Regularized discrete optimal transport. SIIMS, 2014.
[13] N. Courty, R. Flamary, and D. Tuia. Domain adaptation with regularized optimal transport. In ECML PKDD, 2014.
[14] J.-D. Benamou, B. D. Froese, and A. M. Oberman. Numerical solution of the optimal transportation problem using the Monge-Ampère equation. Journal of Computational Physics, 260, 2014.
[15] M. Cuturi and G. Peyré. A smoothed dual approach for variational Wasserstein problems. SIIMS, 2016.
[16] L. Kantorovich. On the translocation of masses. C.R. (Doklady) Acad. Sci. URSS (N.S.), 37, 1942.
[17] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3), 2001.
[18] M. Frank and P. Wolfe. An algorithm for quadratic programming. NRL, 3(1-2), 1956.
[19] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
[20] M. Perrot and A. Habrard. Regressive virtual metric learning. In NIPS, 2015.
[21] L. Bruzzone and M. Marconcini. Domain adaptation problems: A DASVM classification technique and a circular validation strategy. IEEE PAMI, 32(5), 2010.
[22] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.
[23] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[24] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, 2013.
[25] E. Zhong, W. Fan, Q. Yang, O. Verscheure, and J. Ren. Cross validation framework to choose amongst models and datasets for transfer learning. In ECML PKDD, 2010.
[26] J. Solomon, F. De Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances. ACM Trans. on Graphics, 34(4), 2015.
[27] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. ACM Trans. on Graphics, 22(3), 2003.
[28] F. Deng, S. J. Kim, Y.-W. Tai, and M. Brown. Color-aware regularization for gradient domain image manipulation. In ACCV. Springer Berlin Heidelberg, 2012.
5,873 | 6,313 | Bayesian Intermittent Demand Forecasting for Large Inventories
Matthias Seeger, David Salinas, Valentin Flunkert
Amazon Development Center Germany
Krausenstrasse 38
10115 Berlin
[email protected], [email protected], [email protected]
Abstract
We present a scalable and robust Bayesian method for demand forecasting in the
context of a large e-commerce platform, paying special attention to intermittent
and bursty target statistics. Inference is approximated by the Newton-Raphson
algorithm, reduced to linear-time Kalman smoothing, which allows us to operate on
several orders of magnitude larger problems than previous related work. In a study
on large real-world sales datasets, our method outperforms competing approaches
on fast and medium moving items.
1 Introduction
Demand forecasting plays a central role in supply chain management, driving automated ordering,
in-stock management, and facilities planning. Classical forecasting methods, such as exponential
smoothing [10] or ARIMA models [5], produce Gaussian predictive distributions. While sufficient
for inventories of several thousand fast-selling items, Gaussian assumptions are grossly violated
for the extremely large catalogues maintained by e-commerce platforms. There, demand is highly
intermittent and bursty: long runs of zeros, with islands of high counts. Decision making requires
quantiles of predictive distributions [14], whose accuracy suffers under erroneous assumptions.
In this work, we detail a novel methodology for intermittent demand forecasting which operates in
the industrial environment of a very large e-commerce platform. Implemented in Apache Spark
[16], our method is used to process many hundreds of thousands of items and several hundreds of
millions of item-days. Key requirements are automated parameter learning (no expert interventions),
scalability and a high degree of operational robustness. Our system produces forecasts both for short
(one to three weeks) and longer lead times (up to several months), the latter require feature maps
depending on holidays, sales days, promotions, and price changes. Previous work on intermittent
demand forecasting in Statistics is surveyed in [15]: none of these address longer lead times. On a
modelling level, our proposal is related to [6], yet several novelties are essential for operating at the
industrial scale we target here. This paper makes the following contributions:
? A combination of generalized linear models and time series smoothing. The former enables
medium and longer term forecasts, the latter provides temporal continuity and reasonable
distributions over time. Compared to [6], we provide empirical evidence for the usefulness
of this combination.
• A novel algorithm for maximum likelihood parameter learning in state space models with
non-Gaussian likelihood, using approximate Bayesian inference. While there is substantial
related prior work, our proposal stands out in robustness and scalability. We show how
approximate inference is solved by the Newton-Raphson algorithm, fully reduced to Kalman
smoothing once per iteration. This reduction scales linearly (a vanilla implementation
would scale cubically). While previously used in Statistics [7, Sect. 10.7], this reduction
is not widely known in Machine Learning. If L-BFGS is used instead (as proposed in [6]),
approximate inference fails in our real-world use cases.
• A multi-stage likelihood, tailored to intermittent and bursty demand data (extension of
[15]), and a novel transfer function for Poisson likelihood, which robustifies the Laplace
approximation for bursty data. We demonstrate that our approach would not work without
these novelties.
The structure of this paper is as follows. In Section 2, we introduce intermittent demand likelihood
function as well as a generalized linear model baseline. Our novel latent state forecasting methodology
is detailed in Section 3. We relate our approach to prior work in Section 4. In Section 5, we evaluate
our methods both on publicly available data and on a large dataset of real-world demand in the context
of e-commerce, comparing against state of the art intermittent forecasting methods.
2 Generalized Linear Models
In this section, we introduce a likelihood function for intermittent demand data, along with a
generalized linear model as a baseline. Denote demand by z_it ∈ ℕ (i for item, t for day). Our goal is to predict distributions of z_it aggregates in the future. We do this by fitting a probabilistic model to maximize the likelihood of training demand data, then drawing sample paths from the fitted model, which represent forecast distributions. In the sequel, we fix an item i and write z_t instead of z_it.
A model is defined by a likelihood P(z_t | y_t) and a latent function y_t. An example is the Poisson:

$$P_{\mathrm{poi}}(z\,|\,y) = \frac{1}{z!}\,\lambda(y)^z e^{-\lambda(y)}, \qquad z \in \mathbb{N}, \qquad (1)$$

where the rate λ(y) depends on y through a transfer function. Demand data over large inventories is
both intermittent (many z_t = 0) and bursty (occasional large z_t), and is not well represented by a Poisson. A better choice is the multi-stage likelihood, generalizing a proposal in [15]. This likelihood decomposes into K = 3 stages, each with its latent function y^(k). In stage k = 0, we emit z = 0 with probability¹ σ(y^(0)). Otherwise, we transfer to stage k = 1, where z = 1 is emitted with probability σ(y^(1)). Finally, if z ≥ 2, then stage k = 2 draws z − 2 from the Poisson (1) with rate λ(y^(2)).
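As a concrete illustration of the three-stage emission, the following sketch samples demand given the three latent values. The function names and the softplus choice for the stage-2 rate are our own assumptions for illustration, not code from the paper.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sample_multistage(y0, y1, y2, rng):
    """Three-stage emission: z = 0 w.p. sigmoid(y0); else z = 1 w.p.
    sigmoid(y1); else z - 2 is Poisson with rate lam(y2)."""
    if rng.random() < sigmoid(y0):
        return 0
    if rng.random() < sigmoid(y1):
        return 1
    lam = np.log1p(np.exp(y2))   # softplus transfer (illustrative choice)
    return 2 + rng.poisson(lam)

rng = np.random.default_rng(0)
print([sample_multistage(-1.0, 0.5, 2.0, rng) for _ in range(8)])
```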
If the latent function y_t (or functions y_t^(k)) is linear, y_t = x_t^⊤ w, we have a generalized linear model (GLM) [11]. Features in x_t include kernels anchored at holidays (Christmas, Halloween), seasonality indicators (DayOfWeek, MonthOfYear), and promotion or price change indicators. The weights w are learned by maximizing the training data likelihood. For the multi-stage likelihood, this amounts to separate instances of binary classification at stages 0 and 1, and Poisson regression at stage 2. Generalized linear forecasters work reasonably well, but have some important drawbacks. They lack temporal continuity: for short term predictions, even simple smoothers can outperform a tuned GLM. More important, a GLM predicts overly narrow forecast distributions, whose widths do not grow over time, and it neglects temporal correlations. Both drawbacks are alleviated in Gaussian linear time series models, such as exponential smoothing (ES) [10]. A major challenge is to combine this technology with general likelihood functions (Poisson, multi-stage) to enable intermittent demand forecasting.
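To make the stage-wise GLM training concrete, here is a minimal Poisson-regression sketch for stage k = 2. For brevity it uses the exponential transfer λ = e^{w⊤x}, whose score has the simple form X⊤(z − rate); this transfer is an illustrative assumption, since the system described here prefers a logistic-type transfer (Section 3.2).

```python
import numpy as np

def fit_poisson_glm(X, z, lr=0.1, iters=1000):
    """Poisson regression by gradient ascent on the log likelihood of
    z_n ~ Poisson(exp(w' x_n)); the averaged score is X' (z - rate) / n."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        rate = np.exp(X @ w)
        w += lr * X.T @ (z - rate) / len(z)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
z = rng.poisson(np.exp(X @ np.array([0.5, -0.3, 0.2])))
print(fit_poisson_glm(X, z))   # should approach the generating weights
```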
3 Latent State Forecasting
In this section, we develop latent state forecasting for intermittent demand, combining GLMs, general likelihoods, and exponential smoothing time series models. We begin with a single likelihood P(z_t|y_t), for example the Poisson (1), then consider a multi-stage extension. The latent process is

$$y_t = a_t^\top l_{t-1} + b_t, \qquad b_t = w^\top x_t, \qquad l_t = F l_{t-1} + g_t \varepsilon_t, \qquad \varepsilon_t \sim N(0, 1). \qquad (2)$$

Here, b_t is the GLM deterministic linear function, and l_t is a latent state. This innovation state space model (ISSM) [10] is defined by a_t, g_t and F, as well as the prior l_0 ~ P(l_0). Note that ISSMs are characterized by a single Gaussian innovation variable ε_t per time step. In our experiments here, we employ a simple² instance:

$$y_t = l_{t-1} + b_t, \qquad l_t = l_{t-1} + \alpha \varepsilon_t, \qquad l_0 \sim N(\mu_0, \sigma_0^2),$$

meaning that F = [1], a_t = [1], g_t = [α], and the latent state contains a level component only. The free parameters are w (weights), α > 0, and μ_0, σ_0 > 0 of P(l_0), collected in the vector θ.

¹ Here, σ(u) := (1 + e^{−u})^{−1} is the logistic sigmoid.
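A minimal forward simulation of this level-only ISSM (our own sketch; the parameter values are arbitrary) makes the roles of α, μ_0, σ_0 and the innovations ε_t explicit:

```python
import numpy as np

def simulate_issm(b, alpha, mu0, sigma0, rng):
    """Forward pass of the level-only ISSM:
    y_t = l_{t-1} + b_t,  l_t = l_{t-1} + alpha*eps_t,  l_0 ~ N(mu0, sigma0^2)."""
    l = mu0 + sigma0 * rng.standard_normal()   # draw initial level l_0
    y = np.empty(len(b))
    for t in range(len(b)):
        y[t] = l + b[t]                        # latent function at day t
        l += alpha * rng.standard_normal()     # random-walk level update
    return y

rng = np.random.default_rng(1)
b = 0.1 * np.ones(30)                          # deterministic part b_t = w' x_t
print(simulate_issm(b, alpha=0.05, mu0=0.0, sigma0=0.3, rng=rng))
```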
3.1 Training. Prediction. Multiple Stages
We would like to learn θ by maximizing the likelihood of data [z_t]_{t=1,...,T}. Compared to the GLM case, this is harder to do, since the latent (unobserved) variables s = [ε_1, ..., ε_{T−1}, l_0^⊤]^⊤ have to be integrated out. If our likelihood P(z_t|y_t) was Gaussian, this marginalization could be computed analytically via Kalman smoothing [10]. With a non-Gaussian likelihood, the problem is analytically intractable, yet is amenable to the Laplace approximation [4, Sect. 4.4]. The exact log likelihood is

$$\log P(z|\theta) = \log \int P(z, s|\theta)\, ds = \log \int \prod\nolimits_t P(z_t|y_t)\, P(s)\, ds,$$

where y = y(s) is the affine mapping given by (2). We proceed in two steps. First, we find the mode of the posterior: ŝ = argmax_s log P(z, s|θ), the inner optimization problem. Second, we replace −log P(z, s|θ) by its quadratic Taylor approximation f(s; θ) at the mode. The criterion to replace the negative log likelihood is ψ(θ) := −log ∫ e^{−f(s;θ)} ds. More precisely, denote φ_t(y_t) := −log P(z_t|y_t), and let ŷ = y(ŝ), where ŝ is the posterior mode. The log-concavity of the likelihood implies that φ_t(y_t) is convex, and φ_t''(y_t) > 0. The quadratic Taylor approximation to φ_t(y_t) at ŷ_t is φ̃_t(y_t) := −log N(z̃_t | y_t, σ_t²), where σ_t² = 1/φ_t''(ŷ_t) and z̃_t = ŷ_t − σ_t² φ_t'(ŷ_t). Now, Laplace's approximation to −log P(z|θ) can be written as

$$\psi(\theta) = -\log \int \prod\nolimits_t N(\tilde z_t | y_t, \sigma_t^2)\, P(s)\, ds \;+\; \sum\nolimits_t \big( \phi_t(\hat y_t) - \tilde\phi_t(\hat y_t) \big), \qquad y = y(s; \theta). \qquad (3)$$

For log-concave³ P(z_t|y_t), the inner optimization is a convex problem. We use the Newton-Raphson algorithm to solve it. This algorithm iterates between fitting the current criterion by its local second order approximation and minimizing the quadratic surrogate. For the former step, we compute y_t values by a forward pass (2), then replace the potentials P(z_t|y_t) by N(z̃_t | y_t, σ_t²), where the values z̃_t, σ_t² are determined by the second order fit (as above, but at y_t instead of ŷ_t). The latter step amounts to computing the posterior mean (equal to the mode) E[s] of the resulting Gaussian-linear model. This inference problem is solved by Kalman smoothing.⁴

Not only finding the mode ŝ, but also the computation of ∇_θ ψ, is fully reduced to Kalman smoothing. This point is crucial. We can find ŝ by the most effective optimization algorithm there is. In general, each Newton step requires the O(T³) inversion of a Hessian matrix. We reduce it to Kalman smoothing, a robust algorithm with O(T) scaling. As shown in Section 4, Newton-Raphson is essential here: other commonly used optimizers fail to find ŝ in a reasonable time.

Prediction samples are obtained as follows. Denote observed demand by [z_1, z_2, ..., z_T], and unobserved demand in the prediction range by [z_{T+1}, z_{T+2}, ...]. We run Newton-Raphson one more time to obtain the Gaussian approximation to the posterior P(l_T | z_{1:T}) over the final state. For each sample path [z_{T+t}], we draw l_T ~ P(l_T | z_{1:T}) and ε_{T+t} ~ N(0, 1), compute [y_{T+t}] by a forward pass, and z_{T+t} ~ P(z_{T+t} | y_{T+t}). Drawing prediction samples is not more expensive than from a GLM.

Finally, we generalize latent state forecasting to the multi-stage likelihood. As for the GLM, we learn parameters θ^(k) separately for each stage k. Stages k = 0, 1 are binary classification, while stage k = 2 is count regression. Say that a day t is active at stage k if z_t ≥ k. Recall that with GLMs, we simply drop non-active days. Here, we use ISSMs for [y_t^(k)] on the full range t = 1, ..., T, but all non-active y_t^(k) are considered unobserved: no likelihood potential is associated with t. Both Kalman smoothing and mode finding (Laplace approximation) are adapted to missing observations, which presents no difficulties (see also Section 5.1).
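Each Newton-Raphson iteration only needs the per-day pseudo-observations (z̃_t, σ_t²) plus one Kalman smoothing pass. The sketch below computes these site parameters for a plain Poisson likelihood with exponential transfer, chosen here only because its derivatives are short (φ′(y) = e^y − z, φ″(y) = e^y); the production choice is the twice logistic transfer of Section 3.2.

```python
import numpy as np

def poisson_site(z, y_hat):
    """Gaussian pseudo-observation (z_tilde, sigma2) matching the local
    quadratic fit of phi(y) = -log Poisson(z | rate=e^y) at y_hat."""
    grad = np.exp(y_hat) - z      # phi'(y_hat)
    hess = np.exp(y_hat)          # phi''(y_hat) > 0 (log-concave likelihood)
    sigma2 = 1.0 / hess
    z_tilde = y_hat - sigma2 * grad
    return z_tilde, sigma2

# One Newton sweep replaces each P(z_t|y_t) by N(z_tilde_t | y_t, sigma2_t)
# and then calls Kalman smoothing on the resulting Gaussian-linear model.
print(poisson_site(z=3, y_hat=0.5))
```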
² More advanced variants include damping, linear trend, and seasonality factors [10].
³ Unless otherwise said, all likelihoods in this paper are log-concave.
⁴ We use a numerically robust implementation of Kalman smoothing, detailed in [10, Sect. 12].
3.2 Some Details
In this section, we sketch technical details, most of which are novel contributions. As demonstrated
in our experiments, these details are essential for the whole approach to work robustly at the intended
scale on our difficult real-world data. Full details are given in a supplemental report.
We use L-BFGS for the outer optimization of ψ(θ), encoding the constrained parameters: α = α_m + (α_M − α_m) σ(θ_1) with 0 < α_m < α_M, and σ_0 = log(1 + e^{θ_2}) > 0. We add a quadratic regularizer Σ_j (λ_j/2)(θ_j − θ̄_j)² to the criterion, where the λ_j, θ̄_j are shared across all items. Finally, recall that with the multi-stage likelihood, day t is unobserved at stage k > 1 if z_t < k. If for some item there are less than 7 observed days in a stage, we skip training and fall back to fixed parameters θ̄.
Every single evaluation of ψ(θ) requires finding the posterior mode ŝ. This high-dimensional inner optimization has to converge robustly in few iterations: ŝ = argmin_s F(s; θ) := −log P(z|s) − log P(s) = Σ_t φ_t(y_t) − log P(s). The use of Newton-Raphson and its reduction to linear-time Kalman smoothing was noted above. The algorithm is extended by a line search procedure as well as a heuristic to pick a starting point s_0 (see the supplemental report).
We have to compute the gradient ∇_θ ψ(θ), where the criterion is given by (3). The main difficulty here are the indirect dependencies: ψ(θ, ŷ, ŝ), where ŷ = y(ŝ; θ) and ŝ = ŝ(θ). Since ŝ is computed by an iterative algorithm, commonly used automated differentiation tools do not sensibly apply here. Maybe the most difficult indirect term is (∇_ŝ ψ)^⊤ (∂ŝ/∂θ_j), where θ_j ∈ θ. First, ŝ is defined by ∇_ŝ F = 0. Taking the derivative w.r.t. θ_j on both sides, we obtain ∂ŝ/∂θ_j = −(∇_{ŝ,ŝ} F)^{−1} ∇_{ŝ,θ_j} F, so we are looking at −(∇_{ŝ,θ_j} F)^⊤ (∇_{ŝ,ŝ} F)^{−1} (∇_ŝ ψ). It is of course out of the question to compute and invert ∇_{ŝ,ŝ} F. But (∇_{ŝ,ŝ} F)^{−1} (∇_ŝ ψ) corresponds to the posterior mean for an ISSM with Gaussian likelihood, which depends on ∇_ŝ ψ. This means that the indirect gradient part costs one more run of Kalman smoothing, independent of the number of parameters θ_j. Note that the same reasoning underlies our reduction of Newton-Raphson to Kalman smoothing.
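The implicit-differentiation step can be sanity-checked on a scalar toy criterion (entirely our own illustration): at the inner minimizer, ds*/dθ = −F_{sθ}/F_{ss}, and a finite difference of the re-solved inner problem should match.

```python
# Toy inner problem F(s, th) = (s - th)^2 + 0.1 s^4, minimized over s.
def F_grad_s(s, th): return 2.0 * (s - th) + 0.4 * s**3
def F_ss(s, th):     return 2.0 + 1.2 * s**2    # d^2F/ds^2 > 0
def F_s_th(s, th):   return -2.0                # d^2F/(ds dth)

def s_star(th, iters=50):
    s = 0.0
    for _ in range(iters):                      # Newton on the inner problem
        s -= F_grad_s(s, th) / F_ss(s, th)
    return s

th = 0.7
s = s_star(th)
implicit = -F_s_th(s, th) / F_ss(s, th)         # implicit function theorem
fd = (s_star(th + 1e-5) - s_star(th - 1e-5)) / 2e-5
print(implicit, fd)                             # the two should agree closely
```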
A final novel contribution is essential for making the Laplace approximation work on real-world bursty demand data. Recall the transfer function λ(y) for the Poisson likelihood (1) at the highest stage k = 2. As shown in Section 4, the exponential choice λ = e^y fails for all but short term forecasts. With a GLM, the logistic transfer λ(y) = g(y) works well, where g(u) := log(1 + e^u). It behaves like e^y for y < 0, but grows linearly for positive y. However, it exhibits grave problems for latent state forecasting. Denote φ(y) := −log P(z|y), where P(z|y) is the Poisson with logistic transfer. Recall Laplace's approximation from Section 3.1: φ(·) is fit by a quadratic φ̃(·) = (· − z̃)²/(2σ²), where σ² = 1/φ″(y) and z̃ = y − σ² φ′(y). For large y and z = 0, these two terms scale as e^y, while for z > 0 they grow polynomially. In real-world data, we regularly exhibit sizable counts (say, a few z_t > 25, driving up y_t), followed by a single z_t = 0. At this point, huge values (z̃_t, σ_t²) arise, causing cancellation errors in ψ(θ), and the outer optimization terminates prematurely.

The root cause for these issues lies with the transfer function: g(y) ≈ y for large y, while its curvature behaves as e^{−y}. Our remedy is to propose the novel twice logistic transfer function λ(y) = g(y(1 + ξ g(y))), where ξ > 0. If φ_ξ(y) = −log P(z|y) with the new transfer function, then φ_ξ(y) behaves similar to φ(y) for small or negative y, but crucially φ_ξ″(y) ≈ 2ξ for large y and any z ∈ ℕ. This means that Laplace approximation terms are O(1/ξ). Setting ξ = 0.01 resolves all problems described above. Importantly, the resulting Poisson likelihood is log-concave for any ξ ≥ 0. We conjecture that similar problems may arise with other "local" variational or expectation propagation inference approximations as well. The twice logistic transfer function should therefore be of wider applicability.
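A direct implementation of the twice logistic transfer is a one-liner on top of a stable softplus. We write the constant as xi (the symbol is our own notational choice; the value 0.01 is the setting reported above):

```python
import numpy as np

def softplus(u):
    """g(u) = log(1 + e^u), computed stably."""
    return np.logaddexp(0.0, u)

def twice_logistic(y, xi=0.01):
    """Twice logistic transfer lam(y) = g(y * (1 + xi * g(y))).
    Like g(y) for small or negative y, but the curvature of the induced
    Poisson potential stays O(xi) for large y instead of exploding."""
    return softplus(y * (1.0 + xi * softplus(y)))

print(twice_logistic(np.linspace(-5.0, 20.0, 6)))
```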
4 Related Work
Our work has precursors both in Statistics and Machine Learning. Maximum likelihood learning for
exponential smoothing models is developed in [10]. These methods are limited to Gaussian likelihood,
approximate Bayesian inference is not used. Starting from Croston's method [10, Sect. 16.2], there
is a sizable literature on intermittent demand forecasting, as reviewed in [15]. The best-performing
method in [15] uses negative binomial likelihood and a damped dynamic; parameters are learned
by maximum likelihood. There is no latent (random) state, and neither non-Gaussian inference nor
Kalman smoothing are required. It does not allow for a combination with GLMs.
4
We employ approximate Bayesian inference in a linear dynamical system, for which there is a lot
of prior work in Machine Learning [3, 1, 2]. While Laplace's technique is the most frequently used
deterministic approximation in Statistics, both in publications and in automated inference systems
[13], other techniques such as expectation propagation are applicable to models of interest here
[12, 8]. The robustness and predictable running time of Laplace?s approximation are key in our
application, where inference is driving parameter learning, running in parallel over hundreds of
thousands of items. Expectation propagation is not guaranteed to converge, and Markov chain Monte
Carlo methods even lack automated convergence tests.
The work most closely related to ours is [6]. They target intermittent demand forecasting, using a
Laplace approximation for maximum likelihood learning, allow for a combination with GLMs, and
go beyond our work transferring information between items by way of a hierarchical prior distribution.
Their work is evaluated on small datasets and short term scenarios only. In contrast, our system runs
robustly on many hundreds of thousands of items and many millions of item-days, a three orders
of magnitude larger scale than what they report. They do not explore the value of a feature-based
deterministic part, which on our real-world data is essential for medium term forecasts. We find that a
number of choices in [6] are limiting when it comes to robustness and scalability. First, they choose a
likelihood which is not log-concave for two reasons: they use a negative binomial distribution instead
of a Poisson, and they use zero-inflation instead of a multi-stage setup.5 This means their inner
optimization problem is non-convex, jeopardizing robustness and efficiency of the nested learning
process. Moreover, in our multi-stage setup, the conditional probability of zt = 0 versus zt > 0 is
represented exactly, while zero-inflation caters for a time-independent zero probability only.
Next, they use an exponential transfer function λ = e^y for the negative binomial rate, while we propose the novel twice logistic function (Section 3.2). Experiments with the exponential choice on our data resulted in total failure, at least beyond short term forecasts. Its huge curvature for large y results in extremely large and instable predictions around holidays. In fact, the exponential function causes rapid growth of predictions even without a linear function extension, unless the random process is strongly damped. Finally, they use a standard L-BFGS solver for their inner problem, evaluating the criterion using additional sparse matrix software. In contrast, we enable Newton-Raphson by reducing it to Kalman smoothing. In Figure 1, we evaluate the usefulness of L-BFGS for mode finding in our setup.⁶ L-BFGS clearly fails to attain decent accuracy in any reasonable amount of time, while Newton-Raphson converges reliably. Such inner reliability is key to reaching our goal of fully automated learning in an industrial system. In conclusion, while the lack of public code for [6] precludes a direct comparison, their approach, while partly more advanced, should be limited to smaller problems, shorter forecast horizons, and would be hard to run in an industrial setting.

Figure 1: Comparison of Newton-Raphson vs. L-BFGS for the inner optimization (gradient norm vs. time [ms]). Sampled at first evaluation of ψ(θ). Shown are median (P10, P90) over ca. 1500 items. L-BFGS fails to converge to decent accuracy.

5 Experiments
In this section, we present experimental results, comparing variants of our approach to related work.
5.1 Out of Stock Treatment
With a large and growing inventory, a fraction of items is out of stock at any given time, meaning
that order fulfillments are delayed or do not happen at all. When out of stock, an item cannot be sold
(z_t = 0), yet may still elicit considerable customer demand. The probabilistic nature of latent state forecasting renders it easy to use out of stock information. If an item is not in stock at day t, the data z_t = 0 is explained away, and the corresponding likelihood term should be dropped. As noted in Section 3.1, this presents no difficulty in our framework.

⁵ Zero-inflation, p_0 I{z_t = 0} + (1 − p_0) P′(z_t | y_t), destroys log-concavity for z_t = 0.
⁶ The inner problem is convex, and its criterion is efficiently implemented (no dependence on foreign code). The situation in [6] is likely more difficult.
Figure 2: Demand forecast for an item which is partially out of stock. Each panel: training range left (green), prediction range right (red), true targets black. In color: median, P10 to P90. Bottom: out of stock (≥ 80% of day) marked in red. Left: out of stock signal ignored; the demand forecast drops to zero, with strong underbias in the prediction range. Right: out of stock regions treated as missing observations; demand becomes uncertain in the out of stock region, and there is no underbias in the prediction range.
In Figure 2, we show demand forecasts for an item which is out of stock during certain periods in the training range. It is obvious that ignoring the out of stock signal leads to systematic underbias (since z_t = 0 is interpreted as "no demand"). This underbias is corrected for by treating out of stock regions as having unobserved targets. Note that an item may be partially out of stock during a day, still creating some sales. In such cases, we could treat z_t as unobserved but lower-bounded by the sales, and an expectation maximization extension may be applied. However, such situations are comparatively rare in our data (compared to full-day out of stock). In the rest of this section, latent state forecasting takes out of stock information into account.
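Operationally, explaining away out-of-stock days amounts to skipping their likelihood terms. A minimal sketch (our own, with a generic per-day log-likelihood callback):

```python
import numpy as np

def masked_loglik(z, y, in_stock, loglik):
    """Sum per-day log likelihoods over in-stock days only; out-of-stock
    days carry no likelihood term (their z_t = 0 is explained away)."""
    mask = np.asarray(in_stock, dtype=bool)
    z, y = np.asarray(z)[mask], np.asarray(y)[mask]
    return sum(loglik(zt, yt) for zt, yt in zip(z, y))
```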
5.2 Comparative Study
We present experimental results obtained on a number of datasets, containing intermittent counts
time series. Parts contains monthly demand of spare parts at a US automobile company, is publicly
available, and was previously used in [10, 15, 6]. Further results are obtained on internal daily
e-commerce sales data. In either case, we subsampled the sets in a stratified manner from a larger
volume used in our production setting. EC-sub is medium size and contains fast and medium moving
items. EC-all is a large dataset (more than 500K items, 150M item-days), being the union of
EC-sub with items which are slower moving. Properties of these datasets are given in Figure 3, top
left. Demand is highly intermittent and bursty in all cases, as witnessed by a large CV² and a high proportion of z_t = 0: these properties are typical for supply chain data. Not only is EC-all much larger than any public demand forecasting dataset we are aware of, our internal datasets consist of longer series (up to 10×) and are more bursty than Parts.
The following methods are compared. ETS is exponential smoothing with Gaussian additive errors
and automatic model selection, a frequently used R package [9]. NegBin is our implementation of
the negative binomial damped dynamic variant of [15]. We consider two variants of our latent state
forecaster: LS-pure without features, and LS-feats with a feature vector x_t (basic seasonality, kernels at holidays, price changes, out of stock). Predictive distributions are represented by 100 samples over the prediction range (length 8 for Parts, length 365 for others). We employ quadratic regularization for all methods except ETS (see Section 3.2). Hyperparameters consist of regularization constants λ_j and centers θ̄_j (full details are given in the supplemental report). We tune⁷ such parameters on random 10% of the data, evaluating test results on the remaining 90%. For LS-pure
and LS-feats, we use two sets of tuned hyperparameters on the largest set EC-all: one for the
EC-sub part, the other for the rest.
Our metrics quantify the forecast accuracy of certain quantiles of predictive distributions. They
are defined in terms of spans [L, L + S) in the prediction range, where L are lead times. In
general, we ignore days when items are out of stock (see Figure 3, top left, for in-stock ratios).
⁷ We found that careful hyperparameter tuning is important for obtaining good results, also for NegBin. In contrast, regularization is not even mentioned in [15] (our implementation of NegBin includes the same quadratic regularization as for our methods).
If κ_it = I{i in stock at t}, define $Z_{i;(L,S)} = \sum_{t=L}^{L+S-1} \kappa_{it} z_{it}$. For ρ ∈ (0, 1), the predicted ρ-quantile of Z_{i;(L,S)} is denoted by $\hat Z^{\rho}_{i;(L,S)}$. These predictions are obtained from the sample paths by first summing over the span, then estimating the quantile by way of sorting. The ρ-quantile loss⁸ is defined as $L_{\rho}(z, \hat z) = 2(z - \hat z)\,(\rho\, I_{\{z > \hat z\}} - (1-\rho)\, I_{\{z \le \hat z\}})$. The P(ρ·100) risk metric for [L, L+S) is defined as $R_{\rho}[I; (L,S)] = |I|^{-1} \sum_{i \in I} L_{\rho}(Z_{i;(L,S)}, \hat Z^{\rho}_{i;(L,S)})$, where the left argument Z_{i;(L,S)} is computed from test targets.⁹ We focus on P50 risk (ρ = 0.5; mean absolute error) and P90 risk (ρ = 0.9; the 0.9-quantile is often relevant for automated ordering).
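For reference, the P(ρ·100) risk can be computed from forecast samples in a few lines (our own sketch; it assumes the span totals Z have already been aggregated per item):

```python
import numpy as np

def p_risk(rho, Z_true, Z_samples):
    """P(rho*100) risk: mean rho-quantile loss between true span totals
    Z_true (shape (n,)) and forecast samples Z_samples (shape (n, m))."""
    z_hat = np.quantile(Z_samples, rho, axis=1)   # predicted rho-quantiles
    diff = Z_true - z_hat
    return np.mean(2.0 * diff * np.where(diff > 0, rho, rho - 1.0))

rng = np.random.default_rng(0)
print(p_risk(0.9, rng.poisson(5.0, 50), rng.poisson(5.0, (50, 100))))
```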
                   Parts    EC-sub    EC-all
  # items          19874    39700     534884
  Unit t           month    day       day
  Median CV²       2.4      5.8       9.7
  Freq. z_t = 0    54%      46%       83%
  In-stock ratio   100%     73%       71%
  Avg. size series 33       329       293
  # item-days      656K     13M       157M

Figure 3: Table: Dataset properties. CV² = Var[z_t]/E[z_t]² measures burstiness. (a): Sum of weekly P50 point (median) forecast over a one-year prediction range for the different methods (lines: ETS, NegBin, LS-pure, LS-feats) as well as the sum of true demand (shaded area), on dataset I = EC-sub. (b): Weekly P50 risk R_{0.5}[I; (7k, 7)], k = 0, 1, ..., for the same dataset. (c): Same as (b) for P90 risk.
We plot the P50 and P90 risk on dataset EC-sub, as well as the sum of P50 point (median) forecast and
the true demand, in the three panels of Figure 3. All methods work well in the first week, but there
are considerable differences further out. Naturally, losses are highest during the Christmas peak sales
period. LS-feats strongly outperforms all others in this critical region (see Figure 3, top right), by
means of its features (holidays, seasonality). The Gaussian predictive distributions of ETS exhibit
growing errors over time. With the exception of the Christmas period, NegBin works rather well (in
particular in P50 risk), but is uniformly outperformed by both LS-pure, and LS-feats in particular.
A larger range of results are given in Table 1 (Parts, EC-sub) and Table 2 (EC-all), where numbers
are relative to NegBin. Note that the R code for ETS could not be run on the large EC-all. On
Parts, NegBin works best, yet LS-pure comes close (we did not use features on this dataset). On
EC-sub, LS-feats outperforms all others in all scenarios. The featureless NegBin and LS-pure are
comparable on this dataset. On the largest set EC-all, LS-feats generally outperforms the others,
but differences are smaller.
Finally, we report running times of parameter learning (outer optimization) for LS-feats on EC-sub. L-BFGS was run with maxIters = 55, gradTol = 10^{-5}. Our experimental cluster consists of about 150 nodes, with Intel Xeon E5-2670 CPUs (4 cores) and 30GB RAM. Profiling was done separately in each stage: k = 0 (P5 = 0.180s, P50 = 1.30s, P95 = 2.15s), k = 1 (P5 = 0.143s, P50 = 1.11s, P95 = 1.79s), k = 2 (P5 = 0.138s, P50 = 1.29s, P95 = 3.25s). Here, we quote the median (P50), 5% and 95% percentiles (P5, P95). The largest time recorded was 10.4s. The narrow spread of these numbers witnesses the robustness and predictability of the nested optimization process, crucial properties in the context of production systems running on parallel compute clusters.
⁸ E_Z[L_ρ(Z, ẑ)] is minimized by the ρ-quantile. Also, L_{0.5}(z, ẑ) = |z − ẑ|.
⁹ More precisely, we filter I before use in R_ρ[I; (L, S)]: I′ = {i ∈ I : Σ_{t=L}^{L+S−1} κ_it ≥ 0.8 S}.
  Parts            P90 risk           P50 risk
  (L, S)           (0, 2)   dy(8)     (0, 2)   dy(8)
  ETS              1.04     1.04      1.19     1.38
  LS-pure          1.08     1.06      1.04     1.06
  LS-feats         --       --        --       --
  NegBin           1.00     1.00      1.00     1.00

  EC-sub           P90 risk                     P50 risk
  (L, S)           (0, 56)  (21, 84)  wk(33)    (0, 56)  (21, 84)  wk(33)
  ETS              0.99     0.75      1.13      1.07     1.10      1.18
  LS-pure          1.07     0.97      0.99      0.95     1.03      0.99
  LS-feats         0.80     0.73      0.85      0.84     0.84      0.94
  NegBin           1.00     1.00      1.00      1.00     1.00      1.00

Table 1: Results for dataset Parts (left) and EC-sub (right). Metric values relative to NegBin (each column). dy(8): average of R_ρ[I; (k, 1)], k = 0, ..., 7. wk(33): average of R_ρ[I; (7k, 7)], k = 0, ..., 32. LS-feats was not run on Parts (no features were used on that dataset).
  EC-all           P90 risk                     P50 risk
  (L, S)           (0, 56)  (21, 84)  wk(33)    (0, 56)  (21, 84)  wk(33)
  LS-pure          1.11     1.03      0.99      1.00     1.03      1.05
  LS-feats         0.95     0.86      0.89      0.92     0.88      0.98
  NegBin           1.00     1.00      1.00      1.00     1.00      1.00

Table 2: Results for dataset EC-all. Metric values relative to NegBin (each column). ETS could not be run at this scale.
6 Conclusions. Future Work
In this paper, we developed a framework for maximum likelihood learning of probabilistic latent
state forecasting models, which can be seen as principled time series extensions of generalized linear
models. We pay special attention to the intermittent and bursty statistics of demand, characteristic for
the vast inventories maintained by large retailers or e-commerce platforms. We show how approximate
Bayesian inference techniques can be implemented in a robust and highly scalable way, so to enable
a forecasting system which runs safely on hundred of thousands of items and hundreds of millions of
item-days.
We can draw some conclusions from our comparative study on a range of real-world datasets. Our
proposed method strongly outperforms competitors on sales data from fast and medium moving
items. Besides good short term forecasts due to temporal smoothness and well-calibrated growth of
uncertainty, our use of a feature vector seems most decisive for medium term forecasts. On slow
moving items, simpler methods like NegBin [15] are competitive, even though they lack signal
models which could be learned from data.
We are investigating several directions for future work. Our current system uses time-independent
ISSMs; in particular, g_t = [α] means that the same amount of innovation variance is applied every day. This assumption is violated by our data, where a lot more variation happens in the weeks leading up to Christmas or before major holidays than during the rest of the year. To this end, we are exploring learning two parameters: α_h during high-variation periods, and α_l for all remaining days. We also plan to augment the state l_t by seasonality¹⁰ factors [10, Sect. 14] (both a_t, g_t depend on time then).
One of the most important future directions is to learn about and exploit dependencies between
the demand time series of different items. In fact, the strategy to learn and forecast each item
independently is not suitable for items with a short demand history, or for slow moving items. One
approach we pursue is to couple latent processes by a shared (global) linear or non-linear function.
Acknowledgements
We would like to thank Maren Mahsereci for determining the running time figures, and the Wupper
team for all the hard work without which this paper would not have happened.
¹⁰ Currently, periodic seasonality is dealt with by features in x_t.
References
[1] D. Barber. Expectation correction for smoothing in switching linear Gaussian state space
models. Journal of Machine Learning Research, 7:2515–2540, 2006.
[2] D. Barber, T. Cemgil, and S. Chiappa. Bayesian Time Series Models. Cambridge University
Press, 1st edition, 2011.
[3] M. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Unit,
UCL, 2003.
[4] C. Bishop. Pattern Recognition and Machine Learning. Springer, 1st edition, 2006.
[5] G. Box, G. Jenkins, and G. Reinsel. Time Series Analysis: Forecasting and Control. John Wiley
& Sons, 4th edition, 2013.
[6] N. Chapados. Effective Bayesian modeling of groups of related count time series. In E. Xing
and T. Jebara, editors, International Conference on Machine Learning 31, pages 1395–1403.
JMLR.org, 2014.
[7] J. Durbin and S. Koopman. Time Series Analysis by State Space Methods. Oxford Statistical
Sciences. Oxford University Press, 2nd edition, 2012.
[8] Tom Heskes and Onno Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial
Intelligence 18. Morgan Kaufmann, 2002.
[9] R. Hyndman and Y. Khandakar. Automatic time series forecasting: the forecast package for R.
Journal of Statistical Software, 26(3):1–22, 2008.
[10] R. Hyndman, A. Koehler, J. Ord, and R. Snyder. Forecasting with Exponential Smoothing: The
State Space Approach. Springer, 1st edition, 2008.
[11] P. McCullach and J.A. Nelder. Generalized Linear Models. Number 37 in Monographs on
Statistics and Applied Probability. Chapman & Hall, 1st edition, 1983.
[12] T. Minka. Expectation propagation for approximate Bayesian inference. In J. Breese and
D. Koller, editors, Uncertainty in Artificial Intelligence 17. Morgan Kaufmann, 2001.
[13] H. Rue and S. Martino. Approximate Bayesian inference for latent Gaussian models by using
integrated nested Laplace approximations. Journal of Roy. Stat. Soc. B, 71(2):319–392, 2009.
[14] L. Snyder and Z. Shen. Fundamentals of Supply Chain Theory. John Wiley & Sons, 1st edition,
2011.
[15] R. Snyder, J. Ord, and A. Beaumont. Forecasting the intermittent demand for slow-moving
inventories: A modelling approach. International Journal on Forecasting, 28:485–496, 2012.
[16] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. Franklin, S. Shenker,
and I. Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster
computing. In Proceedings of the 9th USENIX conference on Networked Systems Design and
Implementation (NSDI), page 2, 2012.
5,874 | 6,314 | Convex Two-Layer Modeling with Latent Structure
Vignesh Ganapathiraman†, Xinhua Zhang†, Yaoliang Yu‡, Junfeng Wen♮
† University of Illinois at Chicago, Chicago, IL, USA
‡ University of Waterloo, Waterloo, ON, Canada; ♮ University of Alberta, Edmonton, AB, Canada
{vganap2, zhangx}@uic.edu, [email protected], [email protected]
Abstract
Unsupervised learning of structured predictors has been a long standing pursuit
in machine learning. Recently a conditional random field auto-encoder has been
proposed in a two-layer setting, allowing latent structured representation to be
automatically inferred. Aside from being nonconvex, it also requires the demanding
inference of normalization. In this paper, we develop a convex relaxation of
two-layer conditional model which captures latent structure and estimates model
parameters, jointly and optimally. We further expand its applicability by resorting
to a weaker form of inference: maximum a-posteriori. The flexibility of the model
is demonstrated on two structures based on total unimodularity: graph matching
and linear chain. Experimental results confirm the promise of the method.
1 Introduction
Over the past decade deep learning has achieved significant advances in many application areas [1].
By automating the acquisition of latent descriptive and predictive representation, they provide highly
effective models to capture the relationships between observed variables. Recently more refined deep
models have been proposed for structured output prediction, where several random variables for
prediction are statistically correlated [2?4]. Improved performance has been achieved in applications
such as image recognition and segmentation [5] and natural language parsing [6], amongst others.
So far, most deep models for structured output are designed for supervised learning where structured
labels are available. Recently an extension has been made to unsupervised learning. [7] proposed
a conditional random field auto-encoder (CRF-AE)?a two-layer conditional model?where given
the observed data x, the latent structure y is first generated based on p(y|x), and then applied to
reconstruct the observations using p(x|y). The motivation is to find the predictive and discriminative
(rather than common but irrelevant) latent structure in the data. Along similar lines, several other
discriminative unsupervised learning methods are also available [8?11].
Extending auto-encoders X → Y → X to general two-layer models X → Y → Z is not hard. [12, 13]
addressed transliteration between two languages, where Z is the observed binary label indicating if
two words match, and higher accuracy can be achieved if we faithfully recover a letter-wise matching
represented by the unobserved structure Y. In essence, their model optimizes p(z | arg max_y p(y|x)),
uncovering the latent y via its mode under the first layer model. This is known as bi-level optimization
because the arg max of inner optimization is used. A soft variant adopts the mean of y [14]. In
general, conditional models yield more accurate predictions than generative models X ← Y → Z (e.g.
multi-wing harmoniums/RBMs), unless the latter is trained in a discriminative fashion [15].
In computation, all methods require certain forms of tractability in inference. CRF-AE leverages
marginal inference on p(y|x)p(x|y) (over y) for EM. Contrastive divergence, instead, samples
from p(y|x) [11]. For some structures like graph matching, neither of them is tractable [16, 17]
(unless assuming first-order Markovian). In single-layer models, this challenge has been resolved
by max-margin estimation, which relies only on the MAP of p(y|x) [18]. This oracle is much less
demanding than sampling or normalization, as finding the most likely state can be much easier than
summing over all possible y. For example, MAP for graph matching can be solved by max-flow.
Unfortunately a direct extension of max-margin estimation to two-layer modeling meets with immediate obstacles, because here one has to solve max_y p(y|x)p(z|y). In general, p(z|y) depends on y
in a highly nonlinear form, making this augmented MAP inference intractable. This seems to leave
the aforementioned bi-level optimization the only option that retains the sole dependency on MAP.
However, solving this optimization poses a substantial challenge when y is discrete, because the mode
of p(y|x) is almost always invariant to small perturbations of model parameters (i.e. zero gradient).
In this paper we demonstrate that this optimization can be relaxed into a convex formulation while
still preserving sufficient regularities to recover a non-trivial, nonlinear predictive model that supports
structured latent representations. Recently a growing body of research has investigated globally
trainable deep models. But they remain limited. [19] formulated convex conditional models using
layer-wise kernels, connected through nonlinear losses. However these losses are data dependent,
necessitating a transductive setting to retain the context. [20] used boosting but the underlying oracle
is generally intractable. Specific global methods were also proposed for polynomial networks [21]
and sum-product networks [22]. None of these methods accommodate structures in latent layers.
Our convex formulation is achieved by enforcing the first-order optimality conditions of the inner-level optimization via sublinear constraints. Using a semi-definite relaxation, we arrive at the first two-layer model that allows latent structures to be inferred concurrently with model optimization while still admitting globally optimal solutions (§3). To the best of our knowledge, this is the first algorithm in machine learning that directly constructs a convex relaxation for a bi-level optimization based on the inner optimality conditions. Unlike [19], it results in a truly inductive model, and its flexibility is demonstrated with two example structures in the framework of total unimodularity (§4). The only inference required is MAP on p(y|x), and the overall scalability is further improved by a refined optimization algorithm (§5). Experimental results demonstrate its useful potential in practice.
2 Preliminaries and Background
We consider a two-layer latent conditional model X → Y → Z, where X is the input, Z is the output, and Y is a latent layer composed of h random variables {Y_i}_{i=1}^h. Instead of assuming no interdependency between the Y_i as in [19], our major goal here is to model the structure in the latent layer Y. Specifically, we assume a conditional model for the first layer based on an exponential family

$$p(y|x) = q_0(y)\, \exp(-y^\top U x - \varphi(Ux)), \qquad \text{where } q_0(y) = [\![\, y \in \mathcal{Y} \,]\!]. \qquad (1)$$

Here U is the first layer weight matrix, and φ is the log-partition function. q_0(y) is the base measure, with ⟦x⟧ = 1 if x is true, and 0 otherwise. The correlation among the Y_i is instilled by the support set 𝒴, which plays a central role here. For example, when 𝒴 consists of all h-dimensional canonical vectors, p(y|x) recovers the logistic multiclass model. In general, to achieve a tradeoff between computational efficiency and representational flexibility, we make the following assumption on 𝒴:

Assumption 1 (PO-tractable). We assume 𝒴 is bounded, and admits an efficient polar operator. That is, for any vector d ∈ ℝ^h, min_{y∈𝒴} d^⊤y is efficiently solvable.

Note that the support set 𝒴 (hence the base measure q_0) is fixed and does not contain any more parameters. PO-tractability is available in a variety of applications, and we give two examples here.
Graph matching. In a bipartite graph with two sets of vertices {a_i}_{i=1}^n and {b_j}_{j=1}^n, each edge between a_i and b_j has a weight T_ij. The task is to find a one-to-one mapping (this can be extended) between {a_i} and {b_j}, such that the sum of weights on the selected edges is maximized. Denote the matching by Y ∈ {0,1}^{n×n}, where Y_ij = 1 iff the edge (a_i, b_j) is selected. So the optimal matching is the mode of p(Y) ∝ ⟦Y ∈ 𝒴⟧ exp(tr(Y^⊤T)), where the support is 𝒴 = {Y ∈ {0,1}^{n×n} : Y1 = Y^⊤1 = 1}.
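For intuition, the mode of this matching distribution is an assignment problem, solvable exactly in polynomial time. A sketch of this MAP oracle using SciPy's Hungarian solver (our own illustration, not code from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_map(T):
    """MAP over perfect matchings: maximize tr(Y'T) subject to
    Y in {0,1}^{n x n}, Y1 = Y'1 = 1. Total unimodularity makes the
    LP relaxation tight, and the Hungarian method solves it exactly."""
    rows, cols = linear_sum_assignment(-T)   # negate: maximize total weight
    Y = np.zeros_like(T)
    Y[rows, cols] = 1.0
    return Y

T = np.array([[3., 1., 0.], [2., 4., 1.], [0., 2., 5.]])
print(matching_map(T))
```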
Graphical models. For simplicity, consider a linear chain model V_1 − V_2 − ⋯ − V_p. Here each V_i can take one of C possible values, which we encode using the C-dimensional canonical basis v_i. Suppose there is a node potential m_i ∈ ℝ^C for each V_i, and each edge (V_i, V_{i+1}) has an edge potential M_i ∈ ℝ^{C×C}. Then we could directly define a distribution on {V_i}. Unfortunately, it will involve quadratic terms such as v_i^⊤ M_i v_{i+1}, and so a different parameterization is in order. Let Y_i ∈ {0,1}^{C×C} encode the values of (V_i, V_{i+1}) via the row and column indices of Y_i respectively. Then the distribution on {V_i} can be equivalently represented by a distribution on {Y_i}:

$$p(\{Y_i\}) \propto [\![\, \{Y_i\} \in \mathcal{Y} \,]\!]\; \exp\Big( \sum\nolimits_{i=1}^{p} m_i^\top Y_i \mathbf{1} + \sum\nolimits_{i=1}^{p-1} \mathrm{tr}(M_i^\top Y_i) \Big), \qquad (2)$$

$$\text{where } \mathcal{Y} = \big\{ \{Y_i\} : Y_i \in \{0,1\}^{C\times C} \big\} \cap H, \quad \text{with } H := \big\{ \{Y_i\} : \mathbf{1}^\top Y_i \mathbf{1} = 1,\ Y_i^\top \mathbf{1} = Y_{i+1} \mathbf{1} \big\}. \qquad (3)$$
The constraints in H encode the obvious consistency constraints between overlapping edges. This model ultimately falls into our framework in (1). In both examples, the constraints in 𝒴 are totally unimodular (TUM), and therefore the polar operator can be computed by solving a linear program (LP), with the {0,1} constraints relaxed to [0,1]. In §4.1 and §4.2 we will generalize y^⊤Ux to y^⊤d(Ux), where d is an affine function of Ux that allows for homogeneity in temporal models. For clarity, we first develop a general framework using y^⊤Ux.
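For the linear-chain support above, the polar operator reduces to standard Viterbi dynamic programming. A small sketch (our own illustration), with node potentials m and edge potentials M as in the chain example:

```python
import numpy as np

def chain_map(m, M):
    """MAP assignment of the chain: maximize sum_i m[i]'v_i + v_i'M[i]v_{i+1}
    over canonical vectors v_i by dynamic programming (Viterbi)."""
    p = len(m)
    score = np.asarray(m[0], dtype=float)
    back = []
    for i in range(1, p):
        cand = score[:, None] + np.asarray(M[i - 1])  # (prev, next) scores
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + m[i]
    states = [int(score.argmax())]
    for bp in reversed(back):
        states.append(int(bp[states[-1]]))
    return states[::-1]

m = [np.array([0., 1.]), np.array([1., 0.]), np.array([0., 2.])]
M = [np.eye(2), np.eye(2)]
print(chain_map(m, M))   # most likely joint state of (V_1, V_2, V_3)
```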
Output layer. As for the output layer, we assume a conditional model from an exponential family

$$p(z|y) = \exp(z^\top R^\top y - G(R^\top y))\, q_1(z) = \exp(-D_{G^*}(z \,\|\, \nabla G(R^\top y)) + G^*(z))\, q_1(z), \qquad (4)$$

where G is a smooth and strictly convex function, and D_{G^*} is the Bregman divergence induced by the Fenchel dual G^*. Such a parameterization is justified by the equivalence between regular Bregman divergences and regular exponential families [23]. Thanks to the convexity of G, it is trivial to extend p(z|y) to y ∈ conv𝒴 (the convex hull of 𝒴), and G(R^⊤y) will still be convex over conv𝒴 (fixing R).
Finally we highlight the assumptions we make and do not make. First we only assume PO-tractability of 𝒴, hence tractability of MAP inference in p(y|x). We do not assume it is tractable to compute the normalizer φ or its gradient (marginal distributions). We also do not assume that unbiased samples of y can be drawn efficiently from p(y|x). In general, PO-tractability is a weaker assumption. For example, in graph matching MAP inference is tractable while marginalization is NP-hard [16] and sampling requires MCMC [24]. Finally, we do not assume tractability of any sort for p(y|x)p(z|y) (in y), and so it may be hard to solve min_{y∈𝒴} {d^⊤y + G(R^⊤y) − z^⊤R^⊤y}, as G is generally not affine.
2.1 Training principles
At training time, we are provided with a set of feature-label pairs (x, z) ~ p̂, where p̂ is the empirical distribution. In the special case of the auto-encoder, z is tied with x. The "bootstrapping" style estimation [25] optimizes the joint likelihood with the latent y imputed in an optimistic fashion:

$$\min_{U,R}\ \mathbb{E}_{(x,z)\sim\hat p}\ \min_{y\in\mathcal Y}\ -\log p(y|x)p(z|y) \;=\; \min_{U,R}\ \mathbb{E}_{(x,z)\sim\hat p}\ \min_{y\in\mathcal Y}\ \big\{ y^\top Ux + \varphi(Ux) - z^\top R^\top y + G(R^\top y) \big\}.$$

This results in a hard EM estimation, and a soft version can be achieved by adding entropic regularizers on y. Regularization can be imposed on U and R, which we will make explicit later (e.g. bounding the L2 norm). Since the log-partition function φ in p(y|x) is hard to compute, the max-margin approach is introduced, which replaces φ(Ux) by an upper bound max_{ỹ∈𝒴} −ỹ^⊤Ux, leading to a surrogate loss

$$\min_{U,R}\ \mathbb{E}_{(x,z)\sim\hat p}\ \min_{y\in\mathcal Y}\ \Big\{ -z^\top R^\top y + G(R^\top y) + y^\top Ux - \min_{\tilde y\in\mathcal Y} \tilde y^\top Ux \Big\}. \qquad (5)$$
However, the key disadvantage of this method is the augmented inference on y, because we have only assumed the tractability of min_{y∈𝒴} d^⊤y for all d, not min_{y∈𝒴} {y^⊤d + G(R^⊤y) − z^⊤R^⊤y}. In addition, this principle intrinsically determines the latent y as a function of both the input and the output, while at test time the output itself is unknown and is the subject of prediction. The common practice therefore requires a joint optimization over y and z at test time, which is costly in computation.
The goal of this paper is to design a convex formulation in which the latent y is completely determined by the input x, and both prediction and estimation rely only on the polar operator:
argmin_{y∈Y} y'Ux. As a consequence of this goal, it is natural to postulate that the y found this way
renders an accurate prediction of z, or a faithful recovery of x in auto-encoders. This idea, which has
been employed by [e.g., 9, 26], leads to the following bi-level optimization problem:
$$\max_{U,R}\ \mathbb{E}_{(x,z)\sim\hat p}\ \log p\big(z \mid \arg\max_{y\in\mathcal{Y}} p(y|x)\big) \;\approx\; \max_{U,R}\ \mathbb{E}_{(x,z)\sim\hat p}\ \log p\big(z \mid \arg\min_{y\in\mathcal{Y}} y^\top Ux\big) \qquad (6)$$
$$\Longleftrightarrow\quad \min_{U,R}\ \mathbb{E}_{(x,z)\sim\hat p}\ \big[-z^\top R^\top y_x^* + G(R^\top y_x^*)\big], \quad\text{where}\quad y_x^* = \arg\min_{y\in\mathcal{Y}} y^\top Ux. \qquad (7)$$
Directly solving this optimization problem is challenging, because the optimal y*_x is almost surely
invariant to small perturbations of U (e.g. when Y is discrete), so a zero-valued gradient is witnessed
almost everywhere. A more carefully designed optimization algorithm is therefore in demand.
3 A General Framework of Convexification
We propose addressing this bi-level optimization by convex relaxation, built upon the
first-order optimality conditions of the inner-level optimization. First notice that the set Y participates
in the problem (7) only via the polar operator at Ux: argmin_{y∈Y} y'Ux. If Y is discrete, this
problem is equivalent to optimizing over S := conv Y, because a linear function on a convex set is
always optimized at an extreme point. Clearly, S is convex, bounded, closed, and is PO-tractable.
It is important to note that the origin is not necessarily contained in S. To remove the potential
non-uniqueness of the minimizer in (7), we next add a small proximal term to the polar operator
problem (σ is a small positive number):
$$\min_{w\in S}\ w^\top Ux + \frac{\sigma}{2}\|w\|^2. \qquad (8)$$
This leads to a small change in the problem and makes sure that the minimizer is unique.1 Adding
strongly convex terms to the primal and dual objectives is a commonly used technique for accelerated
optimization [27], and has been used in graphical model inference [e.g., 28]. We intentionally
changed the symbol y into w, because here the optimal w is not necessarily in Y.
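As an illustration of (8), when S is the probability simplex the proximal polar operator reduces to a Euclidean projection, for which the standard sorting-based routine applies. The sketch below is our own (it assumes numpy, and σ is the proximal coefficient of (8)).

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex (standard routine).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def prox_polar_simplex(c, sigma):
    # argmin_{w in simplex} w'c + (sigma/2)||w||^2
    #   = argmin_{w in simplex} ||w - (-c/sigma)||^2, i.e. a projection.
    return project_simplex(-c / sigma)
```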
By the convexity of the problem (8), and noting that the gradient of its objective is Ux + σw, w is
optimal if and only if
$$w \in S, \quad\text{and}\quad (Ux + \sigma w)^\top(\tilde w - w) \ge 0\ \ \forall\, \tilde w \in S. \qquad (9)$$
These optimality conditions can be plugged into the bi-level optimization problem (7). Introducing
Lagrange multipliers (λ, μ̃) to enforce the latter condition via a mini-max formulation, we obtain
$$\min_{\|U\|\le 1,\ \|R\|\le 1}\ \mathbb{E}_{(x,z)\sim\hat p}\ \min_{w}\ \max_{\lambda\ge 0,\ \tilde\mu\in S}\ \max_{v}\ \Big[ -z^\top R^\top w + v^\top R^\top w - G^*(v) \qquad (10)$$
$$\qquad\qquad + \delta_S(w) + \lambda\,(Ux + \sigma w)^\top (w - \tilde\mu) \Big], \qquad (11)$$
where δ_S is the {0, ∞}-valued indicator function of the set S. Here we dualized G via G(R'w) =
max_v {v'R'w - G*(v)}, and made explicit the Frobenius norm constraints (||·||) on U and R.²
Applying the change of variable μ = λμ̃, the constraints λ ≥ 0 and μ̃ ∈ S (a convex set) become
(μ, λ) ∈ N := cone{(μ̃, 1) : μ̃ ∈ S},
where cone stands for the conic hull (convex). Similarly we can dualize δ_S(w) = max_τ {τ'w - σ_S(τ)},
where σ_S(τ) := max_{w∈S} τ'w is the support function of S. Now swapping min_w with all the subsequent
max operations (strong duality), we arrive at a form where w can be minimized out analytically:
$$\min_{\|U\|\le1,\ \|R\|\le1}\ \mathbb{E}_{(x,z)\sim\hat p}\ \max_{\tau}\ \max_{(\mu,\lambda)\in N}\ \max_{v}\ \min_{w}\ \Big[ -z^\top R^\top w + v^\top R^\top w - G^*(v) \qquad (12)$$
$$\qquad\qquad + \tau^\top w - \sigma_S(\tau) + (Ux + \sigma w)^\top(\lambda w - \mu) \Big] \qquad (13)$$
$$=\ \min_{\|U\|\le1,\ \|R\|\le1}\ \mathbb{E}_{(x,z)\sim\hat p}\ \max_{\tau}\ \max_{(\mu,\lambda)\in N}\ \max_{v}\ \Big[ -G^*(v) - \sigma_S(\tau) - \mu^\top Ux \qquad (14)$$
$$\qquad\qquad - \tfrac{1}{4\lambda\sigma}\,\big\|R(v-z) + \lambda Ux + \tau - \sigma\mu\big\|^2 \Big]. \qquad (15)$$
Given (U, R), the optimal (v, τ, μ, λ) can be efficiently solved for through a concave maximization.
However, the overall objective is not convex in (U, R) because the quadratic term in (15) is subtracted.
Fortunately it turns out not to be hard to tackle this issue by using a semi-definite programming (SDP)
relaxation which linearizes the quadratic terms. In particular, let I be the identity matrix, and define
$$M := M(U, R) := \begin{pmatrix} I \\ U^\top \\ R^\top \end{pmatrix}(I,\ U,\ R) = \begin{pmatrix} I & U & R \\ U^\top & U^\top U & U^\top R \\ R^\top & R^\top U & R^\top R \end{pmatrix} =: \begin{pmatrix} M_1 & M_u & M_r \\ M_u^\top & M_{u,u} & M_{r,u}^\top \\ M_r^\top & M_{r,u} & M_{r,r} \end{pmatrix}. \qquad (16)$$
Then μ'Ux can be replaced by μ'M_u x, and the quadratic term in (15) can be expanded as
$$f(M, \lambda, \tau, \mu, v; x, z) := \mathrm{tr}\big(M_{r,r}(v-z)(v-z)^\top\big) + \lambda^2\,\mathrm{tr}\big(M_{u,u}xx^\top\big) + 2\lambda\,\mathrm{tr}\big(M_{r,u}x(v-z)^\top\big)$$
$$\qquad\qquad + 2(\tau - \sigma\mu)^\top\big(M_r(v-z) + \lambda M_u x\big) + \|\tau - \sigma\mu\|^2. \qquad (17)$$
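The expansion (17) can be sanity-checked numerically. The following sketch (our own, with made-up dimensions) verifies that f, assembled from the blocks of M in (16), agrees with the directly expanded quadratic of (15):

```python
import numpy as np

rng = np.random.default_rng(1)
h, m, n = 5, 3, 4                      # dims of w, x, z (hypothetical)
U, R = rng.normal(size=(h, m)), rng.normal(size=(h, n))
x, z, v = rng.normal(size=m), rng.normal(size=n), rng.normal(size=n)
tau, mu = rng.normal(size=h), rng.normal(size=h)
lam, sigma = 0.7, 0.1

Mu, Mr = U, R                           # blocks of M(U, R) as in (16)
Muu, Mru, Mrr = U.T @ U, R.T @ U, R.T @ R
t = tau - sigma * mu
f = (np.trace(Mrr @ np.outer(v - z, v - z))
     + lam ** 2 * np.trace(Muu @ np.outer(x, x))
     + 2 * lam * np.trace(Mru @ np.outer(x, v - z))
     + 2 * t @ (Mr @ (v - z) + lam * Mu @ x) + t @ t)
direct = np.sum((R @ (v - z) + lam * U @ x + tau - sigma * mu) ** 2)
assert np.isclose(f, direct)
```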
Since the objective function is linear in M given (λ, τ, μ, v), the overall objective becomes convex
in M after maximizing out these variables. Although this change of variable makes the objective
convex, it shifts the intractability into the feasible region of M:
$$\mathcal{M}_0 := \underbrace{\{M \succeq 0 : M_1 = I,\ \mathrm{tr}(M_{u,u}) \le 1,\ \mathrm{tr}(M_{r,r}) \le 1\}}_{=:\ \mathcal{M}_1}\ \cap\ \{M : \mathrm{rank}(M) = h\}. \qquad (18)$$
¹ If p(y|x) ∝ p₀(y) exp(-y'Ux - (σ/2)||y||²) (for any σ > 0), then there is no need to add this (σ/2)||w||² term.
In this case, all our subsequent developments apply directly. Therefore our approach applies to the broader setting
where L2 projection onto S is tractable, but here we focus on PO-tractability just for clarity of presentation.
² To simplify the presentation, we bound the radius by 1, while in practice it is a hyperparameter to be tuned.
Here M ⪰ 0 means M is real symmetric and positive semi-definite. Due to the rank constraint, M0
is not convex. So a natural relaxation (the only relaxation we introduce besides the proximal term in
(8)) is to drop the rank constraint and optimize over the resulting convex set M1. This leads to the
final convex formulation:
$$\min_{M\in\mathcal{M}_1}\ \mathbb{E}_{(x,z)\sim\hat p}\ \Big[\ \max_{\tau}\ \max_{(\mu,\lambda)\in N}\ \max_{v}\ -G^*(v) - \sigma_S(\tau) - \mu^\top M_u x - \tfrac{1}{4\lambda\sigma}\,f(M, \lambda, \tau, \mu, v; x, z)\ \Big]. \qquad (19)$$
To summarize, we have achieved a convex model for two-layer conditional models in which the
latent structured representation is determined by a polar operator. Instead of bypassing this bi-level
optimization via the usual loss-based approach [e.g., 19, 29], we addressed it directly by leveraging
the optimality conditions of the inner optimization. A convex relaxation is then achieved via SDP.
3.1 Inducing low-rank solutions of relaxation
Although it is generally hard to provide theoretical guarantees for nonlinear SDP relaxations, it is
interesting to note that the constraint set M1 effectively encourages low-rank solutions (hence tighter
relaxations). As a key technical result, we next show that all extreme points of M1 have rank h (the
number of hidden nodes) for all h ≥ 2. Recall that in sparse coding, the atomic norm framework [30]
induces low-complexity solutions by setting up the optimization over the convex hull of atoms, or by
penalizing via its gauge function. Therefore the characterization of the extreme points of M1 might
open up the possibility of analyzing our relaxation by leveraging results from sparse coding.
Lemma 1. Let A_i be symmetric matrices, and consider the set
$$\mathcal{R} := \{X : X \succeq 0,\ \mathrm{tr}(A_i X) \lessgtr b_i,\ i = 1, \dots, m\}, \qquad (20)$$
where m is the number of linear (in)equality constraints, and ⋚ can be any one of ≤, =, or ≥.
Then the rank r of every extreme point of R is upper bounded by
$$r \le \tfrac{1}{2}\big(\sqrt{8m+1} - 1\big). \qquad (21)$$
This result extends [31] by accommodating inequalities in (20); its proof is given in Appendix A.
Now we show that the feasible region M1 as defined by (18) has all extreme points of rank h.
Theorem 1. If h ≥ 2, then all extreme points of M1 have rank h, and M1 is the convex hull of M0.
Proof. Let M be an extreme point of M1. Noting that M ⪰ 0 already encodes the symmetry of M,
the linear constraints for M1 in (18) can be written as ½h(h+1) linear equality constraints and two
linear inequality constraints, so in total m = ½h(h+1) + 2. Plugging this into (21) in the above lemma,
$$\mathrm{rank}(M) \le \Big\lfloor \tfrac{1}{2}\big(\sqrt{8m+1} - 1\big) \Big\rfloor = \Big\lfloor \tfrac{1}{2}\big(\sqrt{4h(h+1)+17} - 1\big) \Big\rfloor = h + \llbracket h = 1 \rrbracket. \qquad (22)$$
Finally, the identity matrix in the top-left corner of M forces rank(M) ≥ h. So rank(M) = h for
all h ≥ 2. It then follows that M1 = conv M0.
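The identity in (22) is easy to verify numerically; a small check of our own:

```python
import math

# floor(0.5*(sqrt(4h(h+1)+17) - 1)) equals h for h >= 2, and h+1 for h = 1.
for h in range(1, 50):
    bound = math.floor(0.5 * (math.sqrt(4 * h * (h + 1) + 17) - 1))
    assert bound == h + (1 if h == 1 else 0)
```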
4 Application in Machine Learning Problems
The framework developed above is generic. For example, when Y represents classification for h
classes by canonical vectors, S = conv Y is the h-dimensional probability simplex (entries summing
to 1). Clearly σ_S(τ) = max_i τ_i, and N = {(x, t) ∈ ℝ₊^{h+1} : 1'x = t}. In many applications, Y can be
characterized as {y ∈ {0,1}^h : Ay ≤ c}, where A is totally unimodular (TUM) and all entries of c are
in {-1, 1, 0}.³ In this case, the convex hull S has all extreme points integral, and S admits an explicit form:
$$\mathcal{Y} = \{y \in \{0,1\}^h : Ay \le c\} \quad\Longrightarrow\quad S = \mathrm{conv}\,\mathcal{Y} = \{w \in [0,1]^h : Aw \le c\}, \qquad (23)$$
i.e., all binary constraints {0, 1} are replaced by the interval [0, 1]. Clearly TUM is a sufficient condition for
PO-tractability, because min_{y∈Y} d'y is then equivalent to min_{w∈S} d'w, an LP. Examples include the
graph matching and linear chain models below. We will refer to Aw ≤ c as the non-box constraints.
4.1 Graph matching
As the first concrete example, we consider convex relaxation for latent graph matching. One task
in natural language is transliteration [12, 32]. Suppose we are given an English word e with m
letters, and a corresponding Hebrew word h with n letters. The goal is to predict whether e and h are
phonetically similar, a binary classification problem with z ∈ {-1, 1}. However it obviously helps to
³ For simplicity, we write equality constraints (handled separately in practice) using two inequality constraints.
find, as an intermediate step, the letter-wise matching between e and h. The underlying assumption is
that each letter corresponds to at most one letter in the word of the other language. So if we augment
both e and h with a sink symbol * at the end (hence making their lengths m̃ := m + 1 and ñ := n + 1
respectively), we would like to find a matching Y ∈ {0,1}^{m̃×ñ} that minimizes the following cost:
$$\min_{Y\in\mathcal{Y}}\ \sum_{i=1}^{\tilde m}\sum_{j=1}^{\tilde n} Y_{ij}\, u^\top\phi_{ij}, \quad\text{where}\quad \mathcal{Y} = \{0,1\}^{\tilde m\times\tilde n} \cap \underbrace{\{Y : Y_{i,:}\mathbf{1} = 1\ \forall i \le m,\ \mathbf{1}^\top Y_{:,j} = 1\ \forall j \le n\}}_{=:\ \mathcal{G}}. \qquad (24)$$
Here Y_{i,:} is the i-th row of Y. φ_{ij} ∈ ℝᵖ is a feature vector associated with the pair consisting of the
i-th letter in e and the j-th letter in h, including the dummy *. Our notation omits its dependency on e
and h. u is a discriminative weight vector that will be learned from data. After finding the optimal Y*,
[12] uses the maximal objective value of (24) to make the final binary prediction: -sign(Σ_{ij} Y*_{ij} u'φ_{ij}).
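Since the constraints G in (24) are TUM, the polar operator can be solved by an off-the-shelf LP solver on the relaxation (23), with the optimum guaranteed integral. A sketch of our own using scipy (assumed available), where the cost matrix has entries C_{ij} = u'φ_{ij}:

```python
import numpy as np
from scipy.optimize import linprog

def match_polar(C):
    # LP relaxation of (24). C is (m+1)-by-(n+1); the last row/column is the
    # sink *. Rows 1..m and columns 1..n must each sum to one; since the
    # constraint matrix is TUM, the LP optimum is integral.
    mt, nt = C.shape
    m, n = mt - 1, nt - 1
    A_eq, b_eq = [], []
    for i in range(m):                       # row constraints Y_{i,:} 1 = 1
        a = np.zeros((mt, nt)); a[i, :] = 1.0
        A_eq.append(a.ravel()); b_eq.append(1.0)
    for j in range(n):                       # column constraints 1'Y_{:,j} = 1
        a = np.zeros((mt, nt)); a[:, j] = 1.0
        A_eq.append(a.ravel()); b_eq.append(1.0)
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * (mt * nt))
    return res.x.reshape(mt, nt)
```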
To pose the problem in our framework, we first notice that the non-box constraints G in (24) are TUM.
Therefore, S is simply [0,1]^{m̃×ñ} ∩ G. Given the decoded w, the output labeling principle above
essentially duplicates u as the output layer weight. A key advantage of our method is to allow the
weights of the two layers to be decoupled. Using a weight vector r ∈ ℝᵖ, we define the output
score as r'Φw, where Φ is a p-by-m̃ñ matrix whose (i, j)-th column is φ_{ij} (so Φ depends on e and
h). Overall, our model follows by instantiating (12) as:
$$\min_{\|u\|\le1,\ \|r\|\le1}\ \mathbb{E}_{(e,h,z)\sim\hat p}\ \max_{\tau}\ \max_{(\mu,\lambda)\in N}\ \max_{v\in\mathbb{R}}\ \min_{w}\ \Big[ -z\,r^\top\Phi w + v\,r^\top\Phi w - G^*(v) + \tau^\top w \qquad (25)$$
$$\qquad\qquad - \sigma_S(\tau) + \sum_{ij}\big(u^\top\phi_{ij} + \sigma w_{ij}\big)\big(\lambda w_{ij} - \mu_{ij}\big) \Big]. \qquad (26)$$
Once more we can minimize out w, which gives rise to a quadratic ||(v - z)Φ'r + λΦ'u + τ - σμ||².
It is again amenable to SDP relaxation, where (M_{u,u}, M_{r,u}, M_{r,r}) correspond to (uu', ru', rr') respectively.
4.2 Homogeneous temporal models
A variety of structured output problems are formulated with graphical models. We highlight the gist of
our technique using a concrete example: unsupervised structured learning for inpainting. Suppose
we are given images of handwritten words, each segmented into p letters, and the latent representation
is the corresponding letter sequence. Since letters are correlated in their appearance in words, the recognition
problem has long been addressed using linear chain conditional random fields. However, imagine that no
ground truth letter label is available, and that instead of predicting labels, we are given images in which a
random small patch is occluded. Our goal will then be to inpaint the patches.
To cast the problem in our two-layer latent structure model, let each letter image in the word be denoted
as a vector x_i ∈ ℝⁿ, and the reconstructed image as z_i ∈ ℝᵐ (m = n here). Let Y_i ∈ {0,1}^{C×C}
(C = 26) encode the labels of the letter pair at positions i and i+1 (as rows and columns of Y_i
respectively). Let U_v ∈ ℝ^{C×n} be the letter-wise discriminative weights, and U_e ∈ ℝ^{C×C} the
pairwise weights. Then by (2), the MAP inference can be reformulated as (cf. the definition of H in (3))
$$\min_{\{Y_i\}\in\mathcal{Y}}\ \sum_{i=1}^{p} \mathbf{1}^\top Y_i^\top U_v x_i + \sum_{i=1}^{p-1} \mathrm{tr}(U_e^\top Y_i), \quad\text{where}\quad \mathcal{Y} = \big\{\{Y_i\} : Y_i \in \{0,1\}^{C\times C}\big\} \cap H. \qquad (27)$$
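For reference, the MAP problem underlying (27) is the usual linear-chain decoding, solvable by Viterbi dynamic programming. The sketch below is our own illustration (not the authors' code), with unary scores U_v x_i and pairwise scores U_e:

```python
import numpy as np

def chain_map(unary, pairwise):
    # Viterbi for min over label sequences of
    #   sum_i unary[i, y_i] + sum_i pairwise[y_i, y_{i+1}],
    # with unary[i] = U_v x_i (shape p-by-C) and pairwise = U_e (C-by-C).
    p, C = unary.shape
    dp = unary[0].copy()                 # best cost of a prefix ending in each label
    back = np.zeros((p, C), dtype=int)
    for i in range(1, p):
        scores = dp[:, None] + pairwise  # scores[a, b]: prev label a -> label b
        back[i] = np.argmin(scores, axis=0)
        dp = scores[back[i], np.arange(C)] + unary[i]
    y = [int(np.argmin(dp))]
    for i in range(p - 1, 0, -1):        # backtrace
        y.append(int(back[i][y[-1]]))
    return y[::-1]
```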
Since the non-box constraints in H are TUM, the problem can be cast in our framework with
S = conv Y = {{Y_i} : Y_i ∈ [0,1]^{C×C}} ∩ H. Finally, to reconstruct the image for each letter, we
assume that each letter j has a basis vector r_j ∈ ℝᵐ. So given W_i, the output of the reconstruction is
R'W_i 1, where R = (r_1, ..., r_C)'. To summarize, our model can be instantiated from (12) as
$$\min_{\|U\|\le1,\ \|R\|\le1}\ \mathbb{E}_{(x,z)\sim\hat p}\ \max_{\tau}\ \max_{(\mu,\lambda)\in N}\ \max_{v}\ \min_{W}\ \Big[\ \sum_{i=1}^{p}\big\{(v_i - z_i)^\top R^\top W_i \mathbf{1} - G^*(v_i)\big\} \qquad (28)$$
$$\qquad + \mathrm{tr}(\tau^\top W) - \sigma_S(\tau) + \sum_{i=1}^{p} \mathrm{tr}\big((U_v x_i \mathbf{1}^\top + \llbracket i \neq p \rrbracket\, U_e + \sigma W_i)^\top (\lambda W_i - \mu_i)\big)\ \Big].$$
Here z_i are the inpainted images in the training set. If no training image is occluded, then just set z_i
to x_i. The constraints on U and R can be refined, e.g. by bounding ||U_v||, ||U_e||, and ||r_j|| separately.
As before, we can derive a quadratic term ||R(v_i - z_i)1' + λU_v x_i 1' + λU_e + τ_i - σμ_i||² by minimizing out W_i, which again leads to SDP relaxations. Even further, we may allow each letter to
employ a set of principal components whose combination yields the reconstruction (Appendix B).
Besides modeling flexibility, our method also accommodates problem-specific simplification. For
example, the dimension of w is often much higher than the number of non-box constraints; Appendix
C shows that for the linear chain, the dimension of w can be reduced from C² to C via a partial Lagrangian.
5 Optimization
The key advantage of our convex relaxation (19) is that the inference depends on S (or equivalently
Y) only through the polar operator. Our overall optimization scheme is to perform projected SGD
over the function of M. This requires: a) given M, computing its objective value and gradient; and b)
projecting to M1. We next detail the solution to the former, relegating the latter to Appendix D.
Given M, we optimize over (λ, τ, μ, v) by projected LBFGS [33]. The objective is easy to compute
thanks to PO-tractability (for the σ_S(τ) term). The only nontrivial part is to project a point (μ₀, λ₀)
onto N, which is amenable to conditional gradient (CG). Formally it requires solving
$$\min_{\mu,\lambda}\ \tfrac{1}{2}\|\mu - \mu_0\|^2 + \tfrac{1}{2}(\lambda - \lambda_0)^2, \quad \text{s.t.}\quad \mu = \lambda s,\ \lambda \in [0, C],\ s \in S. \qquad (29)$$
W.l.o.g., we manually introduced an upper bound C := λ₀ + √(||μ₀||² + λ₀²) on λ.⁴ At each iteration,
CG queries the gradient g_μ in μ and g_λ in λ, and solves the polar operator problem on N:
$$\min_{\mu=\lambda s,\ s\in S,\ \lambda\in[0,C]}\ \mu^\top g_\mu + \lambda g_\lambda \;=\; \min_{s\in S,\ \lambda\in[0,C]}\ \lambda\big(s^\top g_\mu + g_\lambda\big) \;=\; \min\big\{0,\ C \min_{s\in S}(s^\top g_\mu + g_\lambda)\big\}. \qquad (30)$$
So it boils down to the polar operator on S, and is hence tractable. If the optimal value in (30) is
nonnegative, then the current iterate is already optimal. Otherwise we add a basis (s*, 1) to the
ensemble, and a totally corrective update can be performed by CG. More details are available in [34].
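The polar operator (30) on the cone N is simple to implement once a polar oracle on S is available; a minimal sketch of our own (the function names are hypothetical):

```python
import numpy as np

def polar_on_N(g_mu, g_lam, polar_S, C):
    # Implements (30): minimize mu'g_mu + lam*g_lam over (mu, lam) in N with
    # lam <= C.  polar_S(d) must return argmin_{s in S} s'd (the PO oracle).
    s = polar_S(g_mu)
    val = s @ g_mu + g_lam
    if val >= 0:                       # optimum is at the apex of the cone
        return np.zeros_like(g_mu), 0.0
    return C * s, C                    # otherwise push to the boundary lam = C
```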
After finding the optimal M̂, we recover the optimal w for each training example based on the
optimal w in (12). Using it as the initial point, we locally optimize the two-layer models U and R
based on (14).
6 Experimental Results
To empirically evaluate our convex method (henceforth referred to as CVX), we compared it with the
state-of-the-art methods on two prediction problems with latent structure.
Transliteration The first experiment is based on the English-Hebrew corpus [35]. It consists of
250 positive transliteration pairs for training, and 300 pairs for testing. On average there are 6
characters per word in each of the languages. All these pairs are considered "positive examples",
and for negative examples we followed [12] and randomly sampled t̃ ∈ {50, 75, 100} pairs from the
250² - 250 mismatched pairings (which are 20%, 30%, and 40% of 250, resp.). We did not use many
negative examples because, as per [12], our test performance measure depends mainly on the
highest few discriminative values, which are learned largely from the positive examples.
Given a pair of words (e, h), the feature representation φ_{ij} for the i-th letter in e and the j-th letter
in h is defined as the unigram feature: an n-dimensional vector with all 0's except a single one
in the (e_i, h_j)-th coordinate. In this dataset, there are n = 655 possible letter pairs (* included).
Since our primary objective is to determine whether the convex relaxation of a two-layer model with
latent structure can outperform locally trained models, we adopted this simple but effective feature
representation (rather than delving into heuristic feature engineering).
Our test evaluation measure is the Mean Reciprocal Rank (MRR), which is the average of the
reciprocal of the rank of the correct answer. In particular, for each English word e, we calculated the
discriminative score of the respective methods when e is paired with each Hebrew word in the test set,
and then found the rank of the correct word (1 for the highest). The reciprocal of the rank is averaged
over all test pairs, giving the MRR. So a higher value is preferred, and 50% means that on average the
true Hebrew word is the runner-up. For our method, the discriminative score is simply f := r'Φw
(using the symbols in (25)), and that for [12] is f := max_Y u'Φ vec(Y) over the matchings Y in (24)
(with vec(Y) the vectorization of Y).
We compared our method (with σ = 0.1) against the state-of-the-art approach in [12]. It is a special
case of our model with the second-layer weight r tied to the first-layer weight u. They trained it
using a local optimization method, and we will refer to it as Local. Both methods employ an output
loss function max{0, yf}² with y ∈ {+1, -1}, and both contain only one parameter, the bound on
||u|| (and ||r||). We simply tuned it to optimize the performance of Local. The test MRR is shown in
Figure 1, where the number of negative examples was varied over 50, 75, and 100. Local was trained
with random initialization, and we repeated the random selection of the negative examples 10
times, yielding 10 dots in each scatter plot. It is clear that CVX in general delivers significantly higher
MRR than Local, with the dots lying above or close to the diagonal. Since this dataset is not big, the
randomness of the negative set leads to notable variations in the performance (for both methods).
⁴ For λ to be optimal, we require (λ - λ₀)² ≤ ||μ - μ₀||² + (λ - λ₀)² ≤ ||0 - μ₀||² + (0 - λ₀)², i.e., λ ≤ C.
Figure 1: MRR of Local (x-axis) versus MRR of CVX (y-axis) over (a) 50, (b) 75, and (c) 100 negative examples.
SIZE OF OCCLUDED PATCH (k x k)
            k = 2          k = 3          k = 4
CRF-AE      0.29 ± 0.01    0.80 ± 0.01    1.31 ± 0.02
CVX         0.27 ± 0.01    0.79 ± 0.01    1.28 ± 0.02

Table 1: Total inpainting error as a function of the size of the occluded patch (p = 8).

LENGTH OF SEQUENCE
            p = 4          p = 6          p = 8
CRF-AE      1.33 ± 0.04    1.30 ± 0.02    1.31 ± 0.03
CVX         1.29 ± 0.04    1.27 ± 0.02    1.28 ± 0.03

Table 2: Total inpainting error as a function of the length of sequences (k = 4).
Inpainting for occluded images Our second experiment used the structured latent model to inpaint
images. We generated 200 sequences of images for training, each with p ∈ {4, 6, 8} digits. In order
to introduce structure, each sequence can be either odd (i.e. all digits are either 1 or 3) or even (all
digits are 2 or 4), so C = 4. Given the digit label, the corresponding image (x ∈ [0,1]^196) was
sampled from the MNIST dataset, downsampled to 14-by-14. 200 test sequences were also generated.
In the test data, we randomly set a k × k patch of each image to 0 as occluded (k ∈ {2, 3, 4}), and the
task is to inpaint it. This setting is entirely unsupervised, with no digit label available for training. It
falls in the framework of X → Y → Z, where X is the occluded input, Y is the latent digit sequence,
and Z is the recovered image. In our convex method, we tied U_v with R and so we still have a 3-by-3
block matrix M, corresponding to I, U_v and U_e. We set σ to 10⁻¹ and G(·) = ½||·||² (Gaussian). Y
was predicted using the polar operator, based on which Z was predicted with the Gaussian mean.
For comparison, we used CRF-AE, which was proposed very recently by [7]. Although it ties X and
Z, the extension to our setting is trivial by computing the expected value of Z given X. Here P(Z|Y) is
assumed Gaussian, with mean learned by maximizing P(Z = x|X = x), and we initialized all
model parameters with a unit Gaussian. For ease of comparison, we introduced regularization by
constraining model parameters to L2-norm balls rather than penalizing the squared L2 norm. For
both methods, the radius bound was simply chosen as the maximum L2 norm of the images, which
produced consistently good results. We did not use larger k because the images are sized 14-by-14.
The inpainting error of the two methods is shown in Table 1, where we varied the size of
the occluded patch with p fixed to 6, and in Table 2, where the length of the sequence p was varied
while k was fixed to 4. Each number is the sum of squared errors in the occluded patch, averaged over
5 random generations of training and test data (hence producing the mean and standard deviation).
Here we can see that CVX gives lower error than CRF-AE. Unsurprisingly, the error grows almost
quadratically in k. When the length of the sequence grows, the errors of both CVX and CRF-AE fluctuate
nonmonotonically. This is probably because with more images in each node, the total error is summed
over more images, but the error per image decays thanks to the structure.
7 Conclusion
We have presented a new formulation of two-layer models with latent structure, while maintaining
a jointly convex training objective. Its effectiveness is demonstrated by the superior empirical
performance over local training, along with low-rank characterization of the extreme points of the
feasible region. An interesting extension for future investigation is when the latent layer employs
submodularity, with its base polytope mirroring the support set S.
References
[1] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural
Computation, 18:1527-1554, 2006.
[2] L.-C. Chen, A. Schwing, A. Yuille, and R. Urtasun. Learning deep structured models. In ICML. 2015.
[3] V. Mnih, H. Larochelle, and G. E. Hinton. Conditional restricted Boltzmann machines for structured output
prediction. In AISTATS. 2011.
[4] M. Ratajczak, S. Tschiatschek, and F. Pernkopf. Sum-product networks for structured prediction: Context-specific deep conditional random fields. In Workshop on Learning Tractable Probabilistic Models. 2014.
[5] K. Sohn, X. Yan, and H. Lee. Learning structured output representation using deep conditional generative
models. In NIPS. 2015.
[6] R. Collobert. Deep learning for efficient discriminative parsing. In ICML. 2011.
[7] W. Ammar, C. Dyer, and N. A. Smith. Conditional random field autoencoders for unsupervised structured
prediction. In NIPS. 2014.
[8] L. Xu, D. Wilkinson, F. Southey, and D. Schuurmans. Discriminative unsupervised learning of structured
predictors. In ICML. 2006.
[9] H. Daum? III. Unsupervised search-based structured prediction. In ICML. 2009.
[10] N. Smith and J. Eisner. Contrastive estimation: training log-linear models on unlabeled data. In ACL. 2005.
[11] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14(8):1771-1800, 2002.
[12] M.-W. Chang, D. Goldwasser, D. Roth, and V. Srikumar. Discriminative learning over constrained latent
representations. In NAACL. 2010.
[13] M.-W. Chang, V. Srikumar, D. Goldwasser, and D. Roth. Structured output learning with indirect
supervision. In ICML. 2010.
[14] N. Chen, J. Zhu, F. Sun, and E. P. Xing. Large-margin predictive latent subspace learning for multiview
data analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(12):2365-2378, 2012.
[15] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In ICML.
2008.
[16] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In
ICML. 2012.
[17] A. Gane, T. Hazan, and T. Jaakkola. Learning with random maximum a-posteriori perturbation models. In
AISTATS. 2014.
[18] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large
margin approach. In ICML. 2005.
[19] O. Aslan, X. Zhang, and D. Schuurmans. Convex deep learning via normalized kernels. In NIPS. 2014.
[20] Y. Bengio, N. L. Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In NIPS. 2005.
[21] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. In
ICML. 2014.
[22] R. Livni, S. Shalev-Shwartz, and O. Shamir. An algorithm for training polynomial networks, 2014.
ArXiv:1304.7045v2.
[23] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of
Machine Learning Research, 6:1705-1749, 2005.
[24] A. Gotovos, H. Hassani, and A. Krause. Sampling from probabilistic submodular models. In NIPS. 2015.
[25] G. Haffari and A. Sarkar. Analysis of semi-supervised learning with Yarowsky algorithm. In UAI. 2007.
[26] L. Xu, M. White, and D. Schuurmans. Optimal reverse prediction: a unified perspective on supervised,
unsupervised and semi-supervised learning. In ICML. 2009.
[27] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., 103(1):127-152, 2005.
[28] O. Meshi, M. Mahdavi, and A. G. Schwing. Smooth and strong: Map inference with linear convergence.
In NIPS. 2015.
[29] G. Druck, C. Pal, X. Zhu, and A. McCallum. Semi-supervised classification with hybrid generative/discriminative methods. In KDD. 2007.
[30] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S.Willsky. The convex geometry of linear inverse
problems. Foundations of Computational Mathematics, 12(6):805-849, 2012.
[31] G. Pataki. On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal
eigenvalues. Mathematics of Operations Research, 23(2):339-358, 1998.
[32] D. Goldwasser and D. Roth. Transliteration as constrained optimization. In EMNLP. 2008.
[33] http://www.cs.ubc.ca/~schmidtm/Software/minConf.html.
[34] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML. 2013.
[35] https://cogcomp.cs.illinois.edu/page/resource_view/2.
Deep Learning Games
Dale Schuurmans*
Google
[email protected]
Martin Zinkevich
Google
[email protected]
Abstract
We investigate a reduction of supervised learning to game playing that reveals new
connections and learning methods. For convex one-layer problems, we demonstrate
an equivalence between global minimizers of the training problem and Nash
equilibria in a simple game. We then show how the game can be extended to general
acyclic neural networks with differentiable convex gates, establishing a bijection
between the Nash equilibria and critical (or KKT) points of the deep learning
problem. Based on these connections we investigate alternative learning methods,
and find that regret matching can achieve competitive training performance while
producing sparser models than current deep learning strategies.
1 Introduction
In this paper, we investigate a new approach to reducing supervised learning to game playing. Unlike
well known reductions [8, 29, 30], we avoid duality as a necessary component in the reduction,
which allows a more flexible perspective that can be extended to deep models. An interesting finding
is that the no-regret strategies used to solve large-scale games [35] provide effective stochastic
training methods for supervised learning problems. In particular, regret matching [12], a step-size
free algorithm, appears capable of efficient stochastic optimization performance in practice.
A central contribution of this paper is to demonstrate how supervised learning of a directed acyclic
neural network with differentiable convex gates can be expressed as a simultaneous move game with
simple player actions and utilities. For variations of the learning problem (i.e. whether regularization
is considered) we establish connections between the critical points (or KKT points) and Nash
equilibria in the corresponding game. As expected, deep learning games are not simple, since even
approximately training deep models is hard in the worst case [13]. Nevertheless, the reduction reveals
new possibilities for training deep models that have not been previously considered. In particular, we
discover that regret matching with simple initialization can offer competitive training performance
compared to state-of-the-art deep learning heuristics while providing sparser solutions.
Recently, we have become aware of unpublished work [2] that also proposes a reduction of supervised
deep learning to game playing. Although the reduction presented in this paper was developed
independently, we acknowledge that others have also begun to consider the connection between deep
learning and game theory. We compare these two specific reductions in Appendix J, and outline the
distinct advantages of the approach developed in this paper.
2 One-Layer Learning Games
We start by considering the simpler one-layer case, which allows us to introduce the key concepts
that will then be extended to deep models. Consider the standard supervised learning problem where
one is given a set of paired data {(x_t, y_t)}_{t=1}^T, such that (x_t, y_t) ∈ X × Y, and wishes to learn a
* Work performed at Google Brain while on a sabbatical leave from the University of Alberta.
predictor h: X → Y. For simplicity, we assume X = ℝᵐ and Y = ℝⁿ. A standard generalized linear
model can be expressed as h(x) = φ(θx) for some output transfer function φ: ℝⁿ → ℝⁿ and a matrix
θ ∈ ℝ^{n×m} denoting the trainable parameters of the model. Despite the presence of the transfer
function φ, such models are typically trained by minimizing an objective that is convex in z = θx.
OLP (One-layer Learning Problem) Given a loss function ℓ: ℝⁿ × ℝⁿ → ℝ that is convex in
the first argument, let ℓ_t(z) = ℓ(z, y_t) and L_t(θ) = ℓ_t(θx_t). The training problem is to minimize
$L(\theta) = T^{-1}\sum_{t=1}^{T} L_t(\theta)$ with respect to the parameters θ.
We first identify a simple game whose Nash equilibria correspond to global minima of the one-layer
learning problem. This basic relationship establishes a connection between supervised learning and
game playing that we will exploit below. Although this reduction is not a significant contribution by
itself, the one-layer case allows us to introduce some key concepts that we will deploy later when
considering deep neural networks. A one-shot simultaneous move game is defined by specifying: a
set of players, a set of actions for each player, and a set of utility functions that specify the value to
each player given a joint action selection [36, Page 9] (also see Appendix E). Corresponding to the
OLP specified above, we propose the following game.
OLG (One-layer Learning Game) There are two players, a protagonist p and an antagonist a. The
protagonist chooses a parameter matrix θ ∈ ℝ^{n×m}. The antagonist chooses a set of T vectors
and scalars {a_t, b_t}_{t=1}^T, a_t ∈ ℝⁿ, b_t ∈ ℝ, such that a_t'z + b_t ≤ ℓ_t(z) for all z ∈ ℝⁿ; that is, the
antagonist chooses an affine minorant of the local loss for each training example. Both players make
their action choice without knowledge of the other player's choice. Given a joint action selection
(θ, {a_t, b_t}), we define the utility of the antagonist as $U^a = T^{-1}\sum_{t=1}^{T} (a_t^\top \theta x_t + b_t)$, and the utility of
the protagonist as U^p = -U^a. This is a two-person zero-sum game with continuous actions.
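For a convex loss, the antagonist's best response is the tangent minorant at the current prediction, which makes its utility equal the loss. A sketch of our own for the univariate logistic case (theta, x, y are assumed inputs):

```python
import numpy as np

def best_response_antagonist(theta, x, y):
    # Tangent of the logistic loss l(z) = log(1 + exp(-y z)) at z = theta @ x:
    # a = l'(z), b = l(z) - a*z, so a*z' + b <= l(z') for all z' by convexity,
    # with equality at z -- hence this choice maximizes the antagonist utility.
    z = theta @ x
    a = -y / (1.0 + np.exp(y * z))
    b = np.logaddexp(0.0, -y * z) - a * z
    return a, b
```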
A Nash equilibrium is defined by a joint assignment of actions such that no player has any incentive
to deviate. That is, if σ^p = θ denotes the action choice of the protagonist and σ^a = {a_t, b_t} the
choice of the antagonist, then the joint action σ = (σ^p, σ^a) is a Nash equilibrium if U^p(σ̃^p, σ^a) ≤
U^p(σ^p, σ^a) for all σ̃^p, and U^a(σ^p, σ̃^a) ≤ U^a(σ^p, σ^a) for all σ̃^a.
Using this characterization one can then determine a bijection between the Nash equilibria of the
OLG and the global minimizers of the OLP.
Theorem 1 (1) If (θ*, {a_t, b_t}) is a Nash equilibrium of the OLG, then θ* must be a global minimum
of the OLP. (2) If θ* is a global minimizer of the OLP, then there exists an antagonist strategy {a_t, b_t}
such that (θ*, {a_t, b_t}) is a Nash equilibrium of the OLG. (All proofs are given in the appendix.)
Thus far, we have ignored the fact that it is important to control model complexity to improve
generalization, not merely minimize the loss. Although model complexity is normally controlled by
regularizing θ, we will find it more convenient to equivalently introduce a constraint θ ∈ Θ for some
convex set Θ (which we assume satisfies an appropriate constraint qualification; see Appendix C).
The learning problem and corresponding game can then be modified accordingly while still preserving
the bijection between their solution concepts.
OCP (One-layer Constrained Learning Problem) Add the optimization constraint θ ∈ Θ to the OLP.
OCG (One-layer Constrained Learning Game) Add the protagonist action constraint θ ∈ Θ to the OLG.
Theorem 2 (1) If (θ*, {a_t, b_t}) is a Nash equilibrium of the OCG, then θ* must be a constrained
global minimum of the OCP. (2) If θ* is a constrained global minimizer of the OCP, then there exists
an antagonist strategy {a_t, b_t} such that (θ*, {a_t, b_t}) is a Nash equilibrium of the OCG.
2.1 Learning Algorithms
The tight connection between convex learning and two-person zero-sum games raises the question of whether techniques for finding Nash equilibria might offer alternative training approaches.
Surprisingly, the answer appears to be yes.
There has been substantial progress in on-line algorithms for finding Nash equilibria, both in theory
[5, 24, 34] and practice [35]. In the two-person zero-sum case, large games are solved by pitting two
regret-minimizing learning algorithms against each other, exploiting the fact that when both achieve
a regret rate of ε/2, their respective average strategies form an ε-Nash equilibrium [38]. For the game
as described above, where the protagonist action is θ ∈ Θ and the antagonist action is denoted σ_a,
we imagine playing in rounds, where on round k the joint action is denoted by σ^(k) = (θ^(k), σ_a^(k)).
Since the utility function U^i of each player i ∈ {p, a} is affine in that player's own action choice for
any fixed action chosen by the other player, each faces an online convex optimization problem [37]
(note that maximizing U^i is equivalent to minimizing -U^i; see also Appendix G). The total regret
of a player, say the protagonist, is defined with respect to their utility function after K rounds as
$$R^p(\sigma^{(1)} \dots \sigma^{(K)}) = \max_{\theta\in\Theta}\ \sum_{k=1}^{K} U^p(\theta, \sigma_a^{(k)}) - U^p(\theta^{(k)}, \sigma_a^{(k)}).$$
(Nature can also be introduced to choose a random training example on each round, which simply
requires the definition of regret to be expressed in terms of expectations over nature's choices.)
To accommodate regularization in the learning problem, we impose parameter constraints Θ. A
particularly interesting case occurs when one defines Θ = {θ : ||θ||₁ ≤ β}, since the L1 ball
constraint is equivalent to imposing L1 regularization. There are two distinct advantages to L1
regularization in this context. First, as is well known, L1 encourages sparsity in the solution. Second,
and much less appreciated, is the fact that any polytope constraint allows one to reduce the constrained
online convex optimization problem to learning from expert advice over a finite number of experts
[37]: Given a polytope Θ, define the convex hull basis H(Θ) to be a matrix whose columns are the
vertices of Θ. An expert can then be assigned to each vertex in H(Θ), and an algorithm for learning
from expert advice can then be applied by mapping its strategy on round k, ρ^(k) (a probability
distribution over the experts), back to an action choice in the original problem via θ^(k) = H(Θ)ρ^(k),
while the utility vector on round k, u^(k), can be passed back to the experts via H(Θ)'u^(k) [37].
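For the L1 ball in ℝᵈ, the convex hull basis H(Θ) has the 2d columns ±βe_i, so the expert reduction is fully explicit. A minimal sketch of our own (the function name is hypothetical):

```python
import numpy as np

def l1_ball_basis(d, beta):
    # Vertices of Theta = {theta in R^d : ||theta||_1 <= beta}:
    # the 2d points +/- beta * e_i, stored as the columns of H(Theta).
    I = np.eye(d)
    return beta * np.hstack([I, -I])          # shape d x 2d

# Round-trip between the expert view and the original problem:
#   theta = H @ rho    maps an expert distribution to a parameter choice;
#   H.T @ u            maps a utility (gradient) vector to expert utilities.
```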
Since this reduction allows any method for learning from expert advice to be applied to L1 constrained
online convex optimization, we investigated whether alternative algorithms for supervised training
might be uncovered. We considered two algorithms for learning from expert advice: the normalized
exponentiated weight algorithm (EWA) [22, 32] (Algorithm 3); and regret matching (RM), a
simpler method from the economics and game theory literature [12] (Algorithm 2). For supervised
learning, these algorithms operate by using a stochastic sample of the gradient to perform their
updates (outer loop Algorithm 1). EWA possesses superior regret bounds that demonstrate only a
logarithmic dependence on the number of actions; however RM is simpler, hyperparameter-free, and
still possesses reasonable regret bounds [9, 10]. Although exponentiated gradient methods have been
applied to supervised learning [18, 32], we are not aware of any previous attempt to apply regret matching
to supervised training. We compared these to projected stochastic gradient descent (PSGD), which
is the obvious modification of stochastic gradient descent (SGD) that retains a similar regret bound
[7, 28] (Algorithm 4).
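For concreteness, the two expert-advice updates compared here can each be written in a few lines. The sketch below is our paraphrase of the standard RM and EWA rules, not the authors' code:

```python
import numpy as np

def regret_to_strategy(r):
    # Play positive regrets, normalized; fall back to uniform if none.
    pos = np.maximum(r, 0.0)
    s = pos.sum()
    return pos / s if s > 0 else np.full(len(r), 1.0 / len(r))

def rm_update(r, u):
    # Regret matching [12]: accumulate each expert's regret against the
    # utility of the current mixed strategy (no step size needed).
    rho = regret_to_strategy(r)
    r = r + u - rho @ u
    return r, regret_to_strategy(r)

def ewa_update(r, u, eta):
    # Normalized exponentiated weights with step size eta.
    r = r + eta * u
    w = np.exp(r - r.max())                   # shift for numerical stability
    return r, w / w.sum()
```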
2.2 Evaluation
To investigate the utility of these methods for supervised learning, we conducted experiments on
synthetic data and on the MNIST data set [20]. Note that PSGD and EWA have a step size parameter,
η^(k), that greatly affects their performance. The best regret bounds are achieved for step sizes of the
form ηk^{-1/2} and η√(log m)·k^{-1/2} respectively [28]; we also tuned η to generate the best empirical
results. Since the underlying optimization problems are convex, these experiments merely focus on
the speed of convergence to a global minimum of the constrained training problem.
The first set of experiments considered synthetic problems. The data dimension was set to m = 10,
and T = 100 training points were drawn from a standard multivariate Gaussian. For univariate
prediction, a random hyperplane was chosen to label the data (hence the data was linearly separable,
but not with a large margin). The logistic training loss achieved by the running average of the
protagonist strategy θ̄ over the entire training set is plotted in Figure 1a. For multivariate prediction, a
4×10 target matrix θ* was randomly generated to label training data by arg max(θ*x_t). The training
softmax loss achieved by the running average of the protagonist strategy θ̄ over the entire training
set is shown in Figure 1b. The third experiment was conducted on MNIST, which is an n = 10
class problem over m = 784 dimensional inputs with T = 60,000 training examples, evidently not
linearly separable. For this experiment, we used mini-batches of size 100. The training loss of the
running average protagonist strategy θ̄ (single run) is shown in Figure 1c. The apparent effectiveness
of RM in these experiments is a surprising outcome. Even after tuning ? for both PSGD and EWA,
they do not surpass the performance of RM, which is hyperparameter free. We did not anticipate this
observation; the effectiveness of RM for supervised learning appears not to have been previously
noticed. (We do not expect RM to be competitive in high dimensional sparse problems, since its
regret bound has a square root and not a logarithmic dependence on n [9].)
Figure 1: Training loss achieved by different no-regret algorithms: (a) logistic loss, synthetic data;
(b) softmax loss, synthetic data; (c) softmax loss, MNIST data. Subfigures (a) and (b) are averaged
over 100 repeats, log scale x-axis. Subfigure (c) is averaged over 10 repeats (psgd theory off scale).
3 Deep Learning Games
A key contribution of this paper is to show how the problem of training a feedforward neural network
with differentiable convex gates can be reduced to a game. A practical consequence of this reduction
is that it suggests new approaches to training deep models that are inspired by methods that have
recently proved successful for solving massive-scale games.
Feedforward Neural Network A feedforward neural network is defined by a directed acyclic graph
with additional objects attached to the vertices and edges. The network architecture is specified by
N = (V, E, I, O, F), where V is a set of vertices, E ⊆ V × V is a set of edges, I = {i₁, ..., i_m} ⊆ V
is a set of input vertices, O = {o₁, ..., o_n} ⊆ V is a set of output vertices, and F = {f_v : v ∈ V} is a
set of activation functions, where f_v: ℝ → ℝ. The trainable parameters are given by θ: E → ℝ.
In the graph defined by G = (V, E), a path (v₁, ..., v_k) consists of a sequence of vertices such that
(v_j, v_{j+1}) ∈ E for all j. A cycle is a path where the first and last vertex are equal. We assume that G
contains no cycles, the input vertices have no incoming edges (i.e. (u, i) ∉ E for all i ∈ I, u ∈ V),
and the output vertices have no outgoing edges (i.e. (o, v) ∉ E for all o ∈ O, v ∈ V). A directed
acyclic graph generates a partial order ≤ on the vertices, where u ≤ v if and only if there is a path from
u to v. For all v ∈ V, define E_v = {(u, u') ∈ E : u' = v}. The network is related to the training
data by assuming |I| = m, so the number of input vertices corresponds to the number of input features,
and |O| = n, so the number of output vertices corresponds to the number of output dimensions. It is a
good idea (but not required) to have two additional bias inputs, whose corresponding input features
are always set to 0 and 1, respectively, and have edges to all non-input nodes in the graph. Usually,
the activation functions on input and output nodes are the identity, i.e. f_v(x) = x for v ∈ I ∪ O.
Given a training input x_t ∈ ℝᵐ, the computation of the network N is expressed by a circuit value
function c_t that assigns values to each vertex based on the partial order over vertices:
$$c_t(i_k, \theta) = f_{i_k}(x_{tk})\ \ \text{for}\ i_k \in I; \qquad c_t(v, \theta) = f_v\Big(\sum_{u:(u,v)\in E} c_t(u, \theta)\,\theta(u, v)\Big)\ \ \text{for}\ v \in V - I. \qquad (1)$$
Let c_t(o, θ) denote the vector of values at the output vertices, i.e. (c_t(o, θ))_k = c_t(o_k, θ). Since each
f_v is assumed differentiable, the output c_t(o, θ) must also be differentiable with respect to θ.
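A direct transcription of (1) into code only requires a topological order of the vertices. The sketch below is our own illustration, using dictionaries for the incoming edges, weights, activations, and inputs:

```python
def circuit_values(order, in_edges, theta, f, x):
    # Evaluates (1). 'order' is a topological order of V; in_edges[v] lists
    # the vertices u with (u, v) in E; theta maps an edge (u, v) to its
    # weight; f maps a vertex to its activation; x maps input vertices to
    # their feature values.
    c = {}
    for v in order:
        if v in x:                                      # input vertex
            c[v] = f[v](x[v])
        else:                                           # v in V - I
            c[v] = f[v](sum(c[u] * theta[(u, v)] for u in in_edges[v]))
    return c

# Example: a single path i -> v -> o with identity gates on i and o.
ident = lambda s: s
vals = circuit_values(["i", "v", "o"], {"v": ["i"], "o": ["v"]},
                      {("i", "v"): 2.0, ("v", "o"): -1.0},
                      {"i": ident, "v": abs, "o": ident}, {"i": 3.0})
assert vals["o"] == -6.0
```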
When we wish to impose constraints on θ, we assume the constraints factor over vertices and are
applied across the incoming edges to each vertex. That is, for each v ∈ V - I the parameters θ
restricted to E_v are required to be in a set Θ_v ⊆ ℝ^{E_v}, and Θ = Π_{v∈V-I} Θ_v. (We additionally
assume each Θ_v satisfies constraint qualifications (see Appendix C), and can also alter the factorization requirement to allow more complex network architectures; see Appendix H.) If Θ = ℝ^E, we
consider the network to be unconstrained. If Θ is bounded, we consider the network to be bounded.
DLP (Deep Learning Problem) Given a loss function ℓ(z, y) that is convex in the first argument
and satisfies 0 ≤ ℓ(z, y) < ∞ for all z ∈ ℝⁿ, define ℓ_t(z) = ℓ(z, y_t) and L_t(θ) = ℓ_t(c_t(o, θ)). The
training problem is to find a θ ∈ Θ that minimizes $L(\theta) = T^{-1}\sum_{t=1}^{T} L_t(\theta)$.
DLG (Deep Learning Game) We define a one-shot simultaneous move game [36, page 9] with
infinite action sets (Appendix E); we need to specify the players, action sets, and utility functions.
Players: The players consist of a protagonist p for each v ∈ V - I, an antagonist a, and a set
of self-interested zannis s_v, one for each vertex v ∈ V.² Actions: The protagonist for vertex v
chooses a parameter function θ_v ∈ Θ_v. The antagonist chooses a set of T vectors and scalars
{a_t, b_t}_{t=1}^T, a_t ∈ ℝⁿ, b_t ∈ ℝ, such that a_t'z + b_t ≤ ℓ_t(z) for all z ∈ ℝⁿ; that is, the antagonist
chooses an affine minorant of the local loss for each training example. Each zanni s_v chooses
a set of 2T scalars (q_{vt}, d_{vt}), q_{vt} ∈ ℝ, d_{vt} ∈ ℝ, such that q_{vt}z + d_{vt} ≤ f_v(z) for all z ∈ ℝ;
that is, the zanni chooses an affine minorant of its local activation function f_v for each training
example. All players make their action choice without knowledge of the other players' choices.
Utilities: For a joint action σ = (θ, {a_t, b_t}, {q_{vt}, d_{vt}}), the zannis' utilities are defined recursively
following the partial order on vertices. First, for each i ∈ I the utility of zanni s_i on training
example t is U^s_{it}(σ) = d_{it} + q_{it}x_{it}, and for each v ∈ V - I the utility of zanni s_v on example t is
$$U^s_{vt}(\sigma) = d_{vt} + q_{vt}\sum_{u:(u,v)\in E} U^s_{ut}(\sigma)\,\theta(u, v).$$
The total utility of each zanni s_v is given by $U^s_v(\sigma) = \sum_{t=1}^{T} U^s_{vt}(\sigma)$ for v ∈ V. The utility of the
antagonist a is then given by $U^a = T^{-1}\sum_{t=1}^{T} U^a_t$, where $U^a_t(\sigma) = b_t + \sum_{k=1}^{n} a_{kt}\, U^s_{o_k t}(\sigma)$. The utilities of all protagonists are the same:
U^p(σ) = -U^a(σ). (This representation also allows for an equivalent game where nature selects an
example t, tells the antagonist and the zannis, and then everyone plays their actions simultaneously.)
The next lemma shows how the zannis and the antagonist can be expected to act.
Lemma 3 Given a fixed protagonist action θ, there exists a unique joint action for all agents
σ = (θ, {a_t, b_t}, {q_{vt}, d_{vt}}) in which the zannis and the antagonist are playing best responses to θ.
Moreover, U^p(σ) = -L(θ), ∇_θ U^p(σ) = -∇L(θ), and given some protagonist at v ∈ V - I, if we
hold all other agents' strategies fixed, U^p(σ) is an affine function of the strategy of the protagonist at
v. We define σ as the joint action expansion for θ.
There is more detail in the appendix about the joint action expansion. However, the key point is that
if the current cost and partial derivatives can be calculated for each parameter, one can construct the
affine function for each agent. We will return to this in Section 3.1.
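In the joint action expansion, each best-responding zanni plays the tangent of its activation at the current pre-activation, which is the tightest affine minorant allowed by convexity. A small numeric check of our own for a softplus gate:

```python
import numpy as np

# Tangent of f(z) = log(1 + exp(z)) at z0: q = f'(z0), d = f(z0) - q*z0.
softplus = lambda z: np.logaddexp(0.0, z)
dsoftplus = lambda z: 1.0 / (1.0 + np.exp(-z))

z0 = 0.3                                   # current pre-activation (made up)
q, d = dsoftplus(z0), softplus(z0) - dsoftplus(z0) * z0
zs = np.linspace(-4, 4, 101)
assert np.all(q * zs + d <= softplus(zs) + 1e-12)   # global affine minorant
```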
A KKT point is a point that satisfies the KKT conditions [15, 19]: roughly, either it is a critical
point (where the gradient is zero), or it is a point on the boundary of Θ where the gradient points
out of Θ "perpendicularly" (see Appendix C). We can now state the main theorem of the paper,
showing a one-to-one relationship between KKT points and Nash equilibria.
Theorem 4 (DLG Nash Equilibrium) The joint action σ = (θ, {a_t, b_t}, {q_{vt}, d_{vt}}) is a Nash equilibrium of the DLG iff it is the joint action expansion for θ and θ is a KKT point of the DLP.
Corollary 5 If the network is unbounded, the joint action σ = (θ, {a_t, b_t}, {q_{vt}, d_{vt}}) is a Nash
equilibrium of the DLG iff it is the joint action expansion for θ and θ is a critical point of the DLP.
Finally we note that sometimes we need to add constraints between edges incident on different
nodes. For example, in a convolutional neural network, one will have edges e = {u, v} and
e' = {u', v'} such that there is a constraint θ_e = θ_{e'} (see Appendix H). In game theory, if two agents
act simultaneously it is difficult to have one agent's viable actions depend on another agent's action.
Therefore, if parameters are constrained in this manner, it is better to have one agent control both.
The appendix (beginning with Appendix B) extends our model and theory to handle such parameter
tying, which allows us to handle both convolutional networks and non-convex activation functions
(Appendix I). Our theory does not apply to non-smooth activation functions, however (e.g. ReLU
gates), but these can be approximated arbitrarily closely by differentiable activations.
3.1 Learning Algorithms
Characterizing the deep learning problem as a game motivates the consideration of equilibrium-finding
methods as potential training algorithms. Given the previous reduction to expert algorithms,
we will consider the use of the L1 ball constraint Θ_v = {θ_v : ‖θ_v‖₁ ≤ β} at each vertex v. For deep
learning, we have investigated a simple approach by training independent protagonist agents at each
vertex against a best response antagonist and best response zannis [14]. In this case, it is possible

² Nomenclature explanation: Protagonists nominally strive toward a common goal, but their actions can
interfere with one another. Zannis are traditionally considered servants, but their motivations are not perfectly
aligned with the protagonists. The antagonist is diametrically opposed to the protagonists.
Algorithm 1 Main Loop
  On round k, observe some x_t (or a mini-batch)
  Antagonist and zannis choose best responses, which ensures ∇U^p_v(σ_v^{(k)}) = −∇L(θ_v^{(k)})
  g_v^{(k)} ← ∇U^p_v(σ_v^{(k)})
  Apply update to r_v^{(k)}, π_v^{(k)} and θ_v^{(k)} for all v ∈ V

Algorithm 2 Regret Matching (RM)
  r_v^{(k+1)} ← r_v^{(k)} + H(β_v)^T g_v^{(k)} − ⟨π_v^{(k)}, H(β_v)^T g_v^{(k)}⟩ 1
  π_v^{(k+1)} ← [r_v^{(k+1)}]₊ / ⟨1, [r_v^{(k+1)}]₊⟩
  θ_v^{(k+1)} ← H(β_v) π_v^{(k+1)}

Algorithm 3 Exp. Weighted Average (EWA)
  r_v^{(k+1)} ← r_v^{(k)} + η^{(k)} H(β_v)^T g_v^{(k)}
  π_v^{(k+1)} ← exp(r_v^{(k+1)}) / ⟨1, exp(r_v^{(k+1)})⟩
  θ_v^{(k+1)} ← H(β_v) π_v^{(k+1)}

Algorithm 4 Projected SGD
  r_v^{(k+1)} ← r_v^{(k)} + η^{(k)} H(β_v)^T g_v^{(k)}
  π_v^{(k+1)} ← L2_project_to_simplex(r_v^{(k+1)})
  θ_v^{(k+1)} ← H(β_v) π_v^{(k+1)}
to devise interesting and novel learning strategies based on the algorithms for learning from expert
advice. Since the optimization problem is no longer convex in a local protagonist action θ_v, we do
not expect convergence to a joint, globally optimal strategy among protagonists. Nevertheless, one
can develop a generic approach for using the game to generate a learning algorithm.
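As a concrete illustration of Algorithm 2, here is a minimal NumPy sketch (ours, not the authors' code) of one regret-matching step at a single vertex, under the assumption that H(β) is the matrix whose columns are the 2d corners ±β e_i of the L1 ball, playing the role of the experts; all names are ours.

import numpy as np

def normalize_plus(r):
    # mixed strategy from cumulative regrets: [r]_+ / <1, [r]_+>
    p = np.maximum(r, 0.0)
    s = p.sum()
    return p / s if s > 0 else np.full_like(r, 1.0 / r.size)

def rm_update(r, g, H):
    # one regret-matching step at a vertex (cf. Algorithm 2)
    u = H.T @ g                      # per-expert utility this round
    pi = normalize_plus(r)           # current mixed strategy
    r = r + u - (pi @ u)             # instantaneous regret vs. that strategy
    pi = normalize_plus(r)
    theta = H @ pi                   # point played inside the L1 ball
    return r, pi, theta

d, beta = 3, 10.0
H = beta * np.hstack([np.eye(d), -np.eye(d)])    # corners of the L1 ball
r = 100.0 * np.random.randn(2 * d)               # large-sigma initialization
r, pi, theta = rm_update(r, np.random.randn(d), H)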
Algorithm Outline On each round, nature chooses a random training example (or mini-batch).
For each v ∈ V, each protagonist v selects her actions θ_v ∈ Θ_v deterministically. The antagonist
and zannis then select their actions, which are best responses to the θ_v and to each other.³ The
protagonist utilities U^p_v are then calculated. Given the zanni and antagonist choices, U^p_v is affine
in the protagonist's action, and also, by Lemma 3, for all e ∈ E_v we have ∂U^p_v(σ_v)/∂w_e = −∂L_t/∂w_e. Each
protagonist v ∈ V then observes their utility and uses this to update their strategy. See Algorithm 1
for the general loop, and Algorithms 2, 3 and 4 for specific updates.
Given the characterization developed previously, we know that a Nash equilibrium will correspond to
a critical point in the training problem (which is almost certain to be a local minimum rather than
a saddle point [21]). It is interesting to note that the usual process of backpropagating the sampled
(sub)gradients corresponds to computing the best response actions for the zannis and the antagonist,
which then yields the resulting affine utility for the protagonists.
3.2 Experimental Evaluation
We conducted a set of experiments to investigate the plausibility of applying expert algorithms at each
vertex in a feedforward neural network. For comparison, we considered current methods for training
deep models, including SGD [3], SGD with momentum [33], RMSprop, Adagrad [6], and Adam
[17]. Since none of these impose constraints, they technically solve an easier optimization problem,
but they are also un-regularized and therefore might exhibit weaker generalization. We tuned the step
size parameter for each comparison method on each problem. For the expert algorithms, RM, EWA
and PSGD, we found that EWA and PSGD were not competitive, even after tuning their step sizes.
For RM, we initially found that it learned too quickly, with the top layers of the model becoming
sparse; however, we discovered that RM works remarkably well simply by initializing the cumulative
regret vectors r_v^{(0)} with random values drawn from a Gaussian with large standard deviation σ.
As a sanity check, we first conducted experiments on synthetic combinatorial problems: "parity",
defined by y = x₁ ⊕ ⋯ ⊕ x_m, and "folded parity", defined by y = (x₁ ∧ x₂) ⊕ ⋯ ⊕ (x_{m−1} ∧ x_m)
[27]. Parity cannot be approximated by a single-layer model but is representable with a single hidden
layer of linear threshold gates [11], while folded parity is known to be not representable by a (small
weights) linear threshold circuit with only a single hidden layer; at least two hidden layers are required
[27]. For parity we trained a m-4m-1 architecture, and for folded parity a m-4m-4m-1
architecture, both fully connected, with m = 8. Here we chose the L1 constraint bound to be β = 10
and the initialization scale as σ = 100. For the nonlinear activation functions we used a smooth
approximation of the standard ReLU gate, f_v(x) = τ log(1 + e^{x/τ}), with τ = 0.5. The results shown
in Figure 2a and Figure 2d confirm that RM performs competitively, even when producing models
with sparsity, top to bottom, of 18% and 13% for parity, and 27%, 19% and 21% for folded parity.

³ Conceptually, each zanni has a copy of the algorithm of each protagonist and an algorithm for selecting a
joint action for all antagonists and zannis, and thus does not technically depend upon θ_v. In practice, these multiple
copies are unnecessary, and one merely calculates θ_v ∈ Θ_v first.

Figure 2: Experimental results. (a) Learning parity with logistic loss: m-4m-1 architecture, 100 repeats.
(d) Folded parity with logistic loss: m-4m-4m-1 architecture, 100 repeats. (b) and (c): MNIST train
loss and test error, fully connected 784-1024-1024-10 architecture, 10 repeats. (e) and (f): MNIST
train loss and test error, convolutional 28×28-c(5×5, 64)-c(5×5, 64)-c(5×5, 64)-10 architecture, 10 repeats.
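For readers wanting to reproduce the synthetic tasks, here is a small NumPy sketch of the two target functions and the smoothed ReLU described above; the code is our reconstruction, not the authors' release.

import numpy as np

def parity(X):
    # y = x_1 XOR ... XOR x_m for binary inputs
    return np.bitwise_xor.reduce(X, axis=1)

def folded_parity(X):
    # y = (x_1 AND x_2) XOR ... XOR (x_{m-1} AND x_m), for even m
    ands = X[:, 0::2] & X[:, 1::2]
    return np.bitwise_xor.reduce(ands, axis=1)

def smooth_relu(x, tau=0.5):
    # smooth approximation of ReLU: tau * log(1 + exp(x / tau))
    return tau * np.log1p(np.exp(x / tau))

m = 8
X = np.random.randint(0, 2, size=(100, m))
y_parity, y_folded = parity(X), folded_parity(X)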
We next conducted a few experiments on MNIST data. The first experiment used a fully connected 784-1024-1024-10 architecture, where RM was run with β = 30 and initialization scales
(σ₁, σ₂, σ₃) = (50, 200, 50). The second experiment was run with a convolutional architecture
28×28-c(5×5, 64)-c(5×5, 64)-c(5×5, 64)-10 (convolution windows 5×5 with depth 64), where RM
was run with (β₁, β₂, β₃, β₄) = (30, 30, 30, 10) and initialization scale σ = 500. The mini-batch
size was 100, and the x-axis in the plots gives results after each "update" batch of 600 mini-batches
(i.e. one epoch over the training data). Figures 2b, 2c, 2e and 2f show the evolution of the training
losses and test misclassification errors. We dropped
all but SGD, Adam, RMSprop and RM here, since these seemed to dominate the other methods in
our experiments. It is surprising that RM can demonstrate convergence rates that are competitive
with tuned RMSprop, and even outperforms methods like SGD and Adam that are routinely used
in practice. An even more interesting finding is that the solutions found by RM were sparse while
achieving lower test misclassification errors than standard deep learning methods. In particular, in
the fully connected case, the final solution produced by RM zeroed out 32%, 26% and 63% of the
parameter matrices (from the input to the output layer) respectively. For the convolutional case, RM
zeroed out 29%, 27%, 28% and 43% of the parameter matrices respectively. Regarding run times,
we observed that our Tensorflow implementation of RM was only 7% slower than RMSProp on the
convolutional architecture, but 85% slower in the fully connected case.
4 Related Work
There are several works that consider using regret minimization to solve offline optimization problems.
Once stochastic gradient descent was connected to regret minimization in [4], a series of papers
followed [26, 25, 31]. Two popular approaches are currently Adagrad [6] and traditional stochastic
gradient descent. The theme of simplifying the loss is very common: it appears in batch gradient and
incremental gradient approaches [23] as the majorization-minimization family of algorithms. In the
regret minimization literature, the idea of simplifying the class of losses by choosing a minimizer
from a particular family of functions first appeared in [37], and has since been further developed.
By contrast, using games for optimization has a much shorter history. It has been shown
that a game between people can be used to solve optimal coloring [16]. There is also a history of
using regret minimization in games: of interest is [38] that decomposes a single agent into multiple
agents, providing some inspiration for this paper. In the context of deep networks, a paper of interest
connects brain processes to prediction markets [1]. However, the closest work appears to be the
recent manuscript [2] that also poses the optimization of a deep network as a game. Although the
games described there are similar, unlike [2], we focus on differentiable activation functions, and
define agents with different information and motivations. Importantly, [2] does not characterize all
the Nash equilibria in the game proposed. We discuss these issues in more detail in Appendix J.
5 Conclusion
We have investigated a reduction of deep learning to game playing that allowed a bijection between
KKT points and Nash equilibria. One of the novel algorithms considered for supervised learning,
regret matching, appears to provide a competitive alternative that has the additional benefit of
achieving sparsity without unduly sacrificing speed or accuracy. It will be interesting to investigate
alternative training heuristics for deep games, and whether similar successes can be achieved on
larger deep models or recurrent models.
References
[1] D. Balduzzi. Cortical prediction markets. In Proceedings of the 2014 International Conference on
Autonomous Agents and Multi-agent Systems, pages 1265–1272, 2014.
[2] D. Balduzzi. Deep online convex optimization using gated games. http://arxiv.org/abs/1604.01952, 2016.
[3] L. Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade - Second Edition,
pages 421–436. 2012.
[4] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms.
IEEE Transactions on Information Theory, 50(9):2050–2057, September 2004.
[5] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[6] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic
optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[7] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning
in high dimensions. In Inter. Conf. on Machine Learning, pages 272–279, 2008.
[8] Y. Freund and R. Schapire. Adaptive game playing using multiplicative weights. Games and Economic
Behavior, 29(1-2):79–103, 1999.
[9] G. Gordon. No-regret algorithms for structured prediction problems. Technical Report CMU-CALD-05-112, Carnegie Mellon University, 2005.
[10] G. Gordon. No-regret algorithms for online convex programs. In NIPS 19, 2006.
[11] A. Hajnal. Threshold circuits of bounded depth. JCSS, 46(2):129–154, 1993.
[12] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica,
68(5):1127–1150, 2000.
[13] K. Hoeffgen, H. Simon, and K. Van Horn. Robust trainability of single neurons. JCSS, 52(2):114–125,
1995.
[14] M. Johanson, N. Bard, N. Burch, and M. Bowling. Finding optimal abstract strategies in extensive form
games. In AAAI Conference on Artificial Intelligence, pages 1371–1379, 2012.
[15] W. Karush. Minima of functions of several variables with inequalities as side constraints. Master's thesis,
Univ. of Chicago, Chicago, Illinois, 1939.
[16] M. Kearns, S. Suri, and N. Montfort. An experimental study of the coloring problem on human subject
networks. Science, 313:824–827, 2006.
[17] D. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[18] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[19] H. Kuhn and A. Tucker. Nonlinear programming. In Proceedings of 2nd Berkeley Symposium, pages
481–492. University of California Press, 1951.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[21] J. Lee, M. Simchowitz, M. Jordan, and B. Recht. Gradient descent only converges to minimizers. In
29th Annual Conference on Learning Theory, volume 49, 2016.
[22] N. Littlestone and M. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212–261, 1994.
[23] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine
learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
[24] A. Rakhlin and K. Sridharan. Optimization, learning, and games with predictable sequences. In Advances
in Neural Information Processing Systems 26, pages 3066–3074, 2013.
[25] N. Ratliff, D. Bagnell, and M. Zinkevich. Subgradient methods for structured prediction. In Eleventh
International Conference on Artificial Intelligence and Statistics (AISTATS-07), 2007.
[26] N. Ratliff, J. A. Bagnell, and M. Zinkevich. Maximum margin planning. In Twenty-Second International
Conference on Machine Learning (ICML-06), 2006.
[27] A. Razborov. On small depth threshold circuits. In Algorithm Theory (SWAT 92), 1992.
[28] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine
Learning, 4(2):107–194, 2012.
[29] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. In NIPS 19, 2006.
[30] S. Shalev-Shwartz and Y. Singer. A primal-dual perspective of online learning algorithms. Machine
Learning, 69(2-3):115–142, 2007.
[31] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for
SVM. Mathematical Programming, 127(1):3–30, 2011.
[32] N. Srinivasan, V. Ravichandran, K. Chan, J. Vidhya, S. Ramakirishnan, and S. Krishnan. Exponentiated
backpropagation algorithm for multilayer feedforward neural networks. In ICONIP, volume 1, 2002.
[33] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in
deep learning. In Proceedings ICML, pages 1139–1147, 2013.
[34] V. Syrgkanis, A. Agarwal, H. Luo, and R. Schapire. Fast convergence of regularized learning in games. In
Advances in Neural Information Processing Systems 28, pages 2971–2979, 2015.
[35] O. Tammelin, N. Burch, M. Johanson, and M. Bowling. Solving heads-up limit Texas hold'em. In
International Joint Conference on Artificial Intelligence, IJCAI, pages 645–652, 2015.
[36] V. Vazirani, N. Nisan, T. Roughgarden, and É. Tardos. Algorithmic Game Theory. Cambridge Press, 2007.
[37] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Twentieth
International Conference on Machine Learning, 2003.
[38] M. Zinkevich, M. Bowling, M. Johanson, and C. Piccione. Regret minimization in games with incomplete
information. In NIPS, 2007.
5,876 | 6,316 | Satisfying Real-world Goals with Dataset Constraints
Gabriel Goh
Dept. of Mathematics
UC Davis
Davis, CA 95616
[email protected]
Andrew Cotter, Maya Gupta
Google Inc.
1600 Amphitheatre Parkway
Mountain View, CA 94043
[email protected]
[email protected]
Michael Friedlander
Dept. of Computer Science
University of British Columbia
Vancouver, B.C. V6T 1Z4
[email protected]
Abstract
The goal of minimizing misclassification error on a training set is often just one of
several real-world goals that might be defined on different datasets. For example,
one may require a classifier to also make positive predictions at some specified
rate for some subpopulation (fairness), or to achieve a specified empirical recall.
Other real-world goals include reducing churn with respect to a previously deployed model, or stabilizing online training. In this paper we propose handling
multiple goals on multiple datasets by training with dataset constraints, using the
ramp penalty to accurately quantify costs, and present an efficient algorithm to
approximately optimize the resulting non-convex constrained optimization problem.
Experiments on both benchmark and real-world industry datasets demonstrate the
effectiveness of our approach.
1 Real-world goals
We consider a broad set of design goals important for making classifiers work well in real-world
applications, and discuss how metrics quantifying many of these goals can be represented in a
particular optimization framework. The key theme is that these metrics, which range from the
standard precision and recall, to less well-known examples such as coverage and fairness [17, 27, 15],
and including some new proposals, can be expressed in terms of the positive and negative classification
rates on multiple datasets.
Coverage: One may wish to control how often a classifier predicts the positive (or negative) class.
For example, one may want to ensure that only 10% of customers are selected to receive a printed
catalog due to budget constraints, or perhaps to compensate for a biased training set. In practice,
constraining the "coverage rate" (the expected proportion of positive predictions) is often easier than
measuring e.g. accuracy or precision, because coverage can be computed on unlabeled data: labeling
data can be expensive, but acquiring a large number of unlabeled examples is often very easy.
Coverage was also considered by Mann and McCallum [17], who proposed what they call "label
regularization", in which one adds a regularizer penalizing the relative entropy between the mean
score for each class and the desired distribution, with an additional correction to avoid degeneracies.
Churn: Work does not stop once a machine learning model has been adopted. There will be new
training data, improved features, and potentially new model structures. Hence, in practice, one will
deploy a series of models, each improving slightly upon the last. In this setting, determining whether
each candidate should be deployed is surprisingly challenging: if we evaluate on the same held-out
testing set every time a new candidate is proposed, and deploy it if it outperforms its predecessor, then
every compare-and-deploy decision will increase the statistical dependence between the deployed
model and the testing dataset, causing the model sequence to fit the originally-independent testing
data. This problem is magnified if, as is typical, the candidate models tend to disagree only on a
relatively small number of examples near the true decision boundary.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
A simple and safe solution is to draw a fresh testing sample every time one wishes to compare two
models in the sequence, only considering examples on which the two models disagree. Because
labeling data is expensive, one would like these freshly sampled testing datasets to be as small as
possible. It is here that the problem of "churn" arises. Imagine that model A, our deployed model,
is 70% accurate, and that model B, our candidate, is 75% accurate. In the best case, only 5% of
test samples would be labeled differently, and all differences would be "wins" for classifier B. Then
only a dozen or so examples would need to be labeled in order to establish that B is the statistically
significantly better classifier with 95% confidence. In the worst case, model A would be correct and
model B incorrect 25% of the time, model B correct and model A incorrect 30% of the time, and
both models correct the remaining 45% of the time. Then 55% of testing examples will be labeled
differently, and closer to 1000 examples would need to be labeled to determine that model B is better.
We define the "churn rate" as the expected proportion of examples on which the prediction of the
model being considered (model B above) differs from that of the currently-deployed model (model A).
During training, we propose constraining the empirical churn rate with respect to a given deployed
model on a large unlabeled dataset (see also Fard et al. [12] for an alternative approach).
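For concreteness, a minimal sketch (ours, with our own names) of the empirical churn rate between a deployed linear model and a candidate, measured on an unlabeled pool:

import numpy as np

def predictions(w, b, X):
    # deterministic sign predictions of a linear classifier f(x) = <w, x> - b
    return (X @ w - b > 0).astype(int)

def churn_rate(w_old, b_old, w_new, b_new, X_unlabeled):
    # fraction of the unlabeled pool on which the candidate's predictions
    # differ from the deployed model's predictions
    old = predictions(w_old, b_old, X_unlabeled)
    new = predictions(w_new, b_new, X_unlabeled)
    return float(np.mean(old != new))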
Stability: A special case of minimizing churn is to ensure stability of an online classifier as it
evolves, by constraining it to not deviate too far from a trusted classifier on a large held-out unlabeled
dataset.
Fairness: A practitioner may be required to guarantee fairness of a learned classifier, in the sense
that it makes positive predictions on different subgroups at certain rates. For example, one might
require that housing loans be given equally to people of different genders. Hardt et al. [15] identify
three types of fairness: (i) demographic parity, in which positive predictions are made at the same
rate on each subgroup, (ii) equal opportunity, in which only the true positive rates must match, and
(iii) equalized odds, in which both the true positive rates and false positive rates must match. Fairness
can also be specified by a proportion, such as the 80% rule in US law that certain decisions must be
in favor of group B individuals at least 80% as often as group A individuals [e.g. 3, 26, 27, 15].
Zafar et al. [27] propose learning fair classifiers by imposing linear constraints on the covariance
between the predicted labels and the values of certain features, while Hardt et al. [15] propose first
learning an "unfair" classifier, and then choosing population-dependent thresholds to satisfy the
desired fairness criterion. In our framework, rate constraints such as those mentioned above can be
imposed directly, at training time.
Recall and Precision: Requirements of real-world classifiers are often expressed in terms of
precision and recall, especially when examples are highly imbalanced between positives and negatives.
In our framework, we can handle this problem via Neyman-Pearson classification [e.g. 23, 9], in
which one seeks to minimize the false negative rate subject to a constraint on the false positive rate.
Indeed, our ramp-loss formulation is equivalent to that of Gasso et al. [13] in this setting.
Egregious Examples: For certain classification applications, examples may be discovered that are
particularly embarrassing if classified incorrectly. One standard approach to handling such examples
is to increase their weights during training, but this is difficult to get right: too large a weight may
distort the classifier too much in the surrounding feature space, whereas too small a weight may not
fix the problem. Worse, over time the dataset will often be augmented with new training examples
and new features, causing the ideal weights to drift. We propose instead simply adding a constraint
ensuring that some proportion of a set of such egregious examples is correctly classified. Such
constraints should be used with extreme care, since they can cause the problem to become infeasible.
2 Optimization problem
A key aspect of many of the goals of Section 1 is that they are defined on different datasets. For
example, we might seek to maximize the accuracy on a set of labeled examples drawn in some biased
manner, require that its recall be at least 90% on 50 small datasets sampled in an unbiased manner
from 50 different countries, desire low churn relative to a deployed classifier on a large unbiased
unlabeled dataset, and require that 100 given egregious examples be classified correctly.
Another characteristic common to the metrics of Section 1 is that they can be expressed in terms of
the positive and negative classification rates on various datasets. We consider only unlabeled datasets,
as described in Table 1 (a dataset with binary labels, for example, would be handled by partitioning
it into the two unlabeled datasets D⁺ and D⁻ containing the positive and negative examples,
respectively).
Table 1: Dataset notation.
  D: Any dataset
  D⁺, D⁻: Sets of examples labeled positive/negative, respectively
  D⁺⁺, D⁺⁻, D⁻⁺, D⁻⁻: Sets of examples with ground-truth positive/negative labels, and for which a baseline classifier makes positive/negative predictions
  D^A, D^B: Sets of examples belonging to subpopulation A and B, respectively

Table 2: The quantities discussed in Section 1, expressed in the notation used in Problem 1, with the
dependence on w and b dropped for notational simplicity, and using the dataset notation of Table 1.
  Coverage rate: s_p(D)
  #TP, #TN, #FP, #FN: |D⁺| s_p(D⁺), |D⁻| s_n(D⁻), |D⁻| s_p(D⁻), |D⁺| s_n(D⁺)
  #Errors: #FP + #FN
  Error rate: #Errors / (|D⁺| + |D⁻|)
  Recall: #TP / (#TP + #FN) = #TP / |D⁺|
  #Changes: |D⁺⁻| s_p(D⁺⁻) + |D⁻⁺| s_n(D⁻⁺) + |D⁻⁻| s_p(D⁻⁻) + |D⁺⁺| s_n(D⁺⁺)
  Churn rate: #Changes / (|D⁺⁺| + |D⁺⁻| + |D⁻⁺| + |D⁻⁻|)
  Fairness constraint: s_p(D^B) ≥ κ s_p(D^A), where κ > 0
  Equal opportunity constraint: s_p(D^A ∩ D⁺) ≥ κ s_p(D^B ∩ D⁺), where κ > 0
  Egregious example constraint: s_p(D⁺) ≥ κ and/or s_n(D⁻) ≥ κ for a dataset D of egregious examples, where κ ∈ [0, 1]
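A small NumPy sketch (ours) of the basic rates and two Table 2 entries, treating each dataset as a matrix whose rows are feature vectors:

import numpy as np

def s_p(D, w, b):
    # empirical positive classification rate on dataset D
    return float(np.mean(D @ w - b > 0))

def s_n(D, w, b):
    return 1.0 - s_p(D, w, b)

def recall(D_pos, w, b):
    # Table 2: recall = #TP / |D+| = s_p(D+)
    return s_p(D_pos, w, b)

def fairness_ratio(D_A, D_B, w, b):
    # ratio of positive rates across subpopulations, as in the fairness rows
    return s_p(D_A, w, b) / s_p(D_B, w, b)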
We wish to learn a linear classification function f(x) = ⟨w, x⟩ − b parameterized by a
weight vector w ∈ R^d and bias b ∈ R, for which the positive and negative classification rates are:

  s_p(D; w, b) = (1/|D|) Σ_{x∈D} 1(⟨w, x⟩ − b),   s_n(D; w, b) = s_p(D; −w, −b),   (1)

where 1 is an indicator function that is 1 if its argument is positive, 0 otherwise. In words, s_p(D; w, b)
and s_n(D; w, b) denote the proportion of positive or negative predictions, respectively, that f makes
on D. Table 2 specifies how the metrics of Section 1 can be expressed in terms of the s_p's and s_n's.
We propose handling these goals by minimizing an ℓ₂-regularized positive linear combination of
prediction rates on different datasets, subject to upper-bound constraints on other positive linear
combinations of such prediction rates:

Problem 1. Starting point: discontinuous constrained problem

  minimize over w ∈ R^d, b ∈ R:   Σ_{i=1}^k α_i^{(0)} s_p(D_i; w, b) + β_i^{(0)} s_n(D_i; w, b) + (λ/2) ‖w‖₂²
  subject to:   Σ_{i=1}^k α_i^{(j)} s_p(D_i; w, b) + β_i^{(j)} s_n(D_i; w, b) ≤ γ^{(j)},   j ∈ {1, …, m}.
Here, λ is the parameter on the ℓ₂ regularizer, there are k unlabeled datasets D₁, …, D_k, and m
constraints. The metrics minimized by the objective and bounded by the constraints are specified
via the choices of the nonnegative coefficients α_i^{(0)}, β_i^{(0)}, α_i^{(j)}, β_i^{(j)} and upper bounds γ^{(j)} for the
ith dataset and, where applicable, the jth constraint; a user should base these choices on Table 2.
Note that because s_p + s_n = 1, it is possible to transform any linear combination of rates into an
equivalent positive linear combination, plus a constant (see Appendix B¹ for an example).
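For instance (our illustration; the paper's own worked example is in Appendix B): Neyman-Pearson classification is obtained by taking the labeled datasets D⁺ and D⁻, putting β^{(0)} = 1 on D⁺ in the objective, so that the objective is the false negative rate s_n(D⁺; w, b), and putting α^{(1)} = 1 on D⁻ in a single constraint s_p(D⁻; w, b) ≤ γ^{(1)}, which caps the false positive rate; all other coefficients are zero.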
We cannot optimize Problem 1 directly because the rate functions s_p and s_n are discontinuous. We
can, however, work around this difficulty by training a classifier that makes randomized predictions
based on the ramp function [7]:

  σ(z) = max{0, min{1, 1/2 + z}},   (2)

¹ Appendices may be found in the supplementary material.
Algorithm 1 Proposed majorization-minimization procedure for (approximately) optimizing Problem 2. Starting from an initial feasible solution w^{(0)}, b₀, we repeatedly find a convex upper bound
problem that is tight at the current candidate solution, and optimize it to yield the next candidate. See
Section 2.1 for details, and Section 2.2 for how one can perform the inner optimizations on line 3.

MajorizationMinimization(w^{(0)}, b₀, T)
  1  For t ∈ {1, 2, …, T}
  2    Construct an instance of Problem 3 with w₀ = w^{(t−1)} and b₀ = b_{t−1}
  3    Optimize this convex optimization problem to yield w^{(t)} and b_t
  4  Return w^{(T)}, b_T
where the randomized classifier parameterized by w and b will make a positive prediction on x with
probability σ(⟨w, x⟩ − b), and a negative prediction otherwise (see Appendix A for more on this
randomized classification rule). For this randomized classifier, the expected positive and negative
rates will be:

  r_p(D; w, b) = (1/|D|) Σ_{x∈D} σ(⟨w, x⟩ − b),   r_n(D; w, b) = r_p(D; −w, −b).   (3)
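To make these quantities concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) of the ramp, the expected rates of Equation 3, and the randomized prediction rule; rows of D are feature vectors.

import numpy as np

def ramp(z):
    # sigma(z) = max{0, min{1, 1/2 + z}} from Equation 2
    return np.clip(0.5 + z, 0.0, 1.0)

def r_p(D, w, b):
    # expected positive rate of the randomized classifier (Equation 3)
    return float(np.mean(ramp(D @ w - b)))

def r_n(D, w, b):
    return r_p(D, -w, -b)

def randomized_predict(x, w, b, rng=None):
    # predict positive with probability sigma(<w, x> - b)
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.random() < ramp(x @ w - b))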
Using these expected rates yields a continuous (but non-convex) analogue of Problem 1:
Problem 2. Ramp version of Problem 1
  minimize over w ∈ R^d, b ∈ R:   Σ_{i=1}^k α_i^{(0)} r_p(D_i; w, b) + β_i^{(0)} r_n(D_i; w, b) + (λ/2) ‖w‖₂²
  subject to:   Σ_{i=1}^k α_i^{(j)} r_p(D_i; w, b) + β_i^{(j)} r_n(D_i; w, b) ≤ γ^{(j)},   j ∈ {1, …, m}.
Efficient optimization of this problem is the ultimate goal of this section. In Section 2.1, we will
propose a majorization-minimization approach that sequentially minimizes convex upper bounds
on Problem 2, and, in Section 2.2, will discuss how these convex upper bounds may themselves be
efficiently optimized.
2.1 Optimizing the ramp problem
To address the non-convexity of Problem 2, we will iteratively optimize approximations: starting
from a feasible initial candidate solution, we construct a convex optimization problem upper-bounding
Problem 2 that is tight at the current candidate, optimize this convex problem to yield the
next candidate, and repeat.

Our choice of a ramp for σ makes finding such tight convex upper bounds easy: both the hinge function
max{0, 1/2 + z} and the constant-1 function are upper bounds on σ, with the former being tight for all
z ≤ 1/2, and the latter for all z ≥ 1/2 (see Figure 1). We will therefore define the following upper bounds
on σ and 1 − σ, with the additional parameter z′ determining which of the two bounds (hinge or
constant) will be used, such that the bounds will always be tight for z = z′:

Figure 1: Convex upper bounds on the ramp function σ(z) = max{0, min{1, 1/2 + z}}. Notice that the
hinge bound (red) is tight for all z ≤ 1/2, and the constant bound (blue) is tight for all z ≥ 1/2.

  σ̌_p(z; z′) = max{0, 1/2 + z} if z′ ≤ 1/2, and 1 otherwise;   σ̌_n(z; z′) = σ̌_p(−z; −z′).   (4)
Based upon these we define the following upper bounds on the expected rates:

  ř_p(D; w, b; w₀, b₀) = (1/|D|) Σ_{x∈D} σ̌_p(⟨w, x⟩ − b; ⟨w₀, x⟩ − b₀)
  ř_n(D; w, b; w₀, b₀) = (1/|D|) Σ_{x∈D} σ̌_n(⟨w, x⟩ − b; ⟨w₀, x⟩ − b₀),   (5)

which have the properties that both ř_p and ř_n are convex in w and b, and are upper bounds on the original
ramp-based rates:

  ř_p(D; w, b; w₀, b₀) ≥ r_p(D; w, b)   and   ř_n(D; w, b; w₀, b₀) ≥ r_n(D; w, b),
Algorithm 2 Skeleton of a cutting-plane algorithm that optimizes Equation 6 to within ε for v ∈ V,
where V ⊆ R^m is compact and convex. Here, l₀, u₀ ∈ R are finite with l₀ ≤ max_{v∈V} z(v) ≤ u₀.
There are several options for the CutChooser function on line 8; please see Appendix E for details.
The SVMOptimizer function returns w^{(t)} and b_t approximately minimizing Ψ(w, b, v^{(t)}; w₀, b₀), and
a lower bound l_t ≤ z(v^{(t)}) for which u_t − l_t ≤ ε_t, for u_t as defined on line 10.

CuttingPlane(l₀, u₀, V, ε)
  1  Initialize g^{(0)} ∈ R^m to the all-zero vector
  2  For t ∈ {1, 2, …}
  3    Let h_t(v) = min_{s∈{0,1,…,t−1}} u_s + ⟨g^{(s)}, v − v^{(s)}⟩
  4    Let L_t = max_{s∈{0,1,…,t−1}} l_s and U_t = max_{v∈V} h_t(v)
  5    If U_t − L_t ≤ ε then
  6      Let s ∈ {1, …, t−1} be an index maximizing l_s
  7      Return w^{(s)}, b_s, v^{(s)}
  8    Let (v^{(t)}, ε_t) = CutChooser(h_t, L_t)
  9    Let (w^{(t)}, b_t, l_t) = SVMOptimizer(v^{(t)}, h_t(v^{(t)}), ε_t)
  10   Let u_t = Ψ(w^{(t)}, b_t, v^{(t)}; w₀, b₀) and g^{(t)} = ∇_v Ψ(w^{(t)}, b_t, v^{(t)}; w₀, b₀)
and are tight at w₀, b₀:

  ř_p(D; w₀, b₀; w₀, b₀) = r_p(D; w₀, b₀)   and   ř_n(D; w₀, b₀; w₀, b₀) = r_n(D; w₀, b₀).
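A small NumPy sketch of the bounds (4) and (5), ours rather than the paper's: each example uses the hinge bound when its anchor value ⟨w₀, x⟩ − b₀ is at most 1/2, and the constant bound otherwise, so the result is convex in (w, b) and tight at (w₀, b₀).

import numpy as np

def sigma_check_p(z, z0):
    # hinge bound where z0 <= 1/2, constant-1 bound otherwise (Equation 4)
    return np.where(z0 <= 0.5, np.maximum(0.0, 0.5 + z), 1.0)

def r_check_p(D, w, b, w0, b0):
    # convex upper bound on r_p, tight at (w0, b0) (Equation 5)
    return float(np.mean(sigma_check_p(D @ w - b, D @ w0 - b0)))

def r_check_n(D, w, b, w0, b0):
    return float(np.mean(sigma_check_p(-(D @ w - b), -(D @ w0 - b0))))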
Substituting these bounds into Problem 2 yields:

Problem 3. Convex upper bound on Problem 2

  minimize over w ∈ R^d, b ∈ R:   Σ_{i=1}^k α_i^{(0)} ř_p(D_i; w, b; w₀, b₀) + β_i^{(0)} ř_n(D_i; w, b; w₀, b₀) + (λ/2) ‖w‖₂²
  subject to:   Σ_{i=1}^k α_i^{(j)} ř_p(D_i; w, b; w₀, b₀) + β_i^{(j)} ř_n(D_i; w, b; w₀, b₀) ≤ γ^{(j)},   j ∈ {1, …, m}.
As desired, this problem upper bounds Problem 2, is tight at w₀, b₀, and is convex (because any
positive linear combination of convex functions is convex).

Algorithm 1 contains our proposed procedure for approximately solving Problem 2. Given an initial
feasible solution, it is straightforward to verify inductively, using the fact that we construct tight
convex upper bounds at every step, that every convex subproblem will have a feasible solution,
every (w^{(t)}, b_t) pair will be feasible w.r.t. Problem 2, and every (w^{(t+1)}, b_{t+1}) will have an objective
function value that is no larger than that of (w^{(t)}, b_t). In other words, no iteration can make negative
progress. The non-convexity of Problem 2, however, will cause Algorithm 1 to arrive at a suboptimal
solution that depends on the initial (w^{(0)}, b₀).
2.2 Optimizing the convex subproblems
The first step in optimizing Problem 3 is to add Lagrange multipliers v over the constraints, yielding
the equivalent unconstrained problem:
  maximize over v ⪰ 0:   z(v) = min_{w,b} Ψ(w, b, v; w₀, b₀),   (6)

where the function:

  Ψ(w, b, v; w₀, b₀) = Σ_{i=1}^k [ (α_i^{(0)} + Σ_{j=1}^m v_j α_i^{(j)}) ř_p(D_i; w, b; w₀, b₀)
                      + (β_i^{(0)} + Σ_{j=1}^m v_j β_i^{(j)}) ř_n(D_i; w, b; w₀, b₀) ]
                      + (λ/2) ‖w‖₂² − Σ_{j=1}^m v_j γ^{(j)}   (7)

is convex in w and b, and concave in the multipliers v. For the purposes of this section, w₀ and b₀,
which were found in the previous iteration of Algorithm 1, are fixed constants.
Because this is a convex-concave saddle point problem, there are a large number of optimization
techniques that could be successfully applied. For example, in settings similar to our own, Eban et al.
[10] simply perform SGD jointly over all parameters (including v), while Gasso et al. [13] use the
Uzawa algorithm, which would alternate between (i) optimizing exactly over w and b, and (ii) taking
gradient steps on v.
We instead propose an approach for which, in our setting, it is particularly easy to create an efficient
implementation. The key insight is that evaluating z(v) is, thanks to our use of hinge and constant
upper-bounds on our ramp σ, equivalent to optimization of a support vector machine (SVM) with
per-example weights (see Appendix F for details). This observation enables us to solve the saddle system
in an inside-out manner. On the "inside", we optimize over (w, b) for fixed v using an off-the-shelf
SVM solver [e.g. 6]. On the "outside", the resulting (w, b)-optimizer is used as a component in a
cutting-plane optimization over v. Notice that this outer optimization is very low-dimensional, since
v ∈ R^m, where m is the number of constraints.
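The following is a hedged sketch, our simplification rather than a transcription of Appendix F, of why the inner minimization looks like a weighted SVM: once the multipliers v fix the per-dataset coefficients, only examples whose bound from Equation 4 is in the hinge regime contribute non-constant, weighted hinge terms, so a subgradient of Ψ in (w, b) is cheap to assemble and can be fed to any SVM-style solver; all names below are ours.

import numpy as np

def psi_subgradient(w, b, X_hp, wts_hp, X_hn, wts_hn, lam):
    # X_hp: examples whose r-check-p bound is in the hinge regime, with combined
    #       nonnegative weights wts_hp (coefficient / |D_i| for each example);
    # X_hn: likewise for the r-check-n bounds; constant-regime examples drop out.
    gw, gb = lam * w, 0.0
    active = (0.5 + (X_hp @ w - b)) > 0          # hinge max{0, 1/2 + z} active
    gw += (wts_hp * active) @ X_hp
    gb -= np.sum(wts_hp * active)
    active = (0.5 - (X_hn @ w - b)) > 0          # hinge max{0, 1/2 - z} active
    gw -= (wts_hn * active) @ X_hn
    gb += np.sum(wts_hn * active)
    return gw, gb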
Algorithm 2 contains a skeleton of the cutting-plane algorithm that we use for this outer optimization
over v. Because this algorithm is intended to be used as an outer loop in a nested optimization
routine, it does not expect that z(v) can be evaluated or differentiated exactly. Rather, it is based upon
the idea of possibly making "shallow" cuts [4] by choosing a desired accuracy ε_t at each iteration,
and expecting the SVMOptimizer to return a solution with suboptimality ε_t. More precisely, the
SVMOptimizer function approximately evaluates z(v^{(t)}) for a given fixed v^{(t)} by constructing the
corresponding SVM problem and finding a (w^{(t)}, b_t) for which the primal and dual objective function
values differ by at most ε_t.

After finding (w^{(t)}, b_t), the SVMOptimizer then evaluates the dual objective function value of
the SVM to determine l_t. The primal objective function value u_t and its gradient g^{(t)} w.r.t. v
(calculated on line 10 of Algorithm 2) define the cut u_t + ⟨g^{(t)}, v − v^{(t)}⟩. Notice that since
Ψ(w^{(t)}, b_t, v; w₀, b₀) is a linear function of v, it is equal to this cut function, which therefore
upper-bounds min_{w,b} Ψ(w, b, v; w₀, b₀).

One advantage of this cutting-plane formulation is that typical CutChooser implementations will
choose ε_t to be large in the early iterations, and will only shrink it to be ε or smaller once we are close
to convergence. We leave the details of the analysis to Appendices E and F; a summary can be found
in Appendix G.
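To illustrate line 4 of Algorithm 2, here is a hedged sketch of the bound maximization U_t = max_{v∈V} h_t(v) as a small linear program, assuming V is the box [0, v_max]^m; scipy is used only as a convenient LP solver, and the function names are ours.

import numpy as np
from scipy.optimize import linprog

def maximize_cut_model(us, gs, vs, v_max):
    # h(v) = min_s (u_s + <g_s, v - v_s>); maximize via the epigraph LP:
    #   max t  s.t.  t - <g_s, v> <= u_s - <g_s, v_s>  for every cut s,
    # with variables [t, v_1, ..., v_m] and v constrained to the box.
    m = len(vs[0])
    c = np.concatenate([[-1.0], np.zeros(m)])        # linprog minimizes -t
    A_ub = np.stack([np.concatenate([[1.0], -np.asarray(g)]) for g in gs])
    b_ub = np.array([u - np.dot(g, v) for u, g, v in zip(us, gs, vs)])
    bounds = [(None, None)] + [(0.0, v_max)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0], res.x[1:]                        # (U_t, maximizing v)

With at least one cut recorded, the LP is bounded, and the returned maximizer can be handed to the CutChooser/SVMOptimizer pair for the next iteration.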
3 Related work
The problem of finding optimal trade-offs in the presence of multiple objectives has been studied
generically in the field of multi-objective optimization [18]. Two common approaches are (i)
linear scalarization [18, Section 3.1], and (ii) the method of ε-constraints [18, Section 3.2]. Linear
scalarization reduces to the common heuristic of reweighting groups of examples. The method of
ε-constraints puts hard bounds on the magnitudes of secondary objectives, like our dataset constraints.
Notice that, in our formulation, the Lagrange multipliers v play the role of the weights in the linear
scalarization approach, with the difference being that, rather than being provided directly by the
user, they are dynamically chosen to satisfy constraints. The user controls the problem through these
constraint choices, which have concrete real-world meanings.
While the hinge loss is one of the most commonly-used convex upper bounds on the 0/1 loss [22],
we use the ramp loss, trading off convexity for tightness. For our purposes, the main disadvantage of
the hinge loss is that it is unbounded, and therefore cannot distinguish a single very bad example from
say, 10 slightly bad ones, making it ill-suited for constraints on rates. In contrast, for the ramp loss
the contribution of any single datum is bounded, no matter how far it is from the decision boundary.
The ramp loss has also been investigated in Collobert et al. [7] (without constraints). Gasso et al.
[13] use the ramp loss both in the objective and constraints, but their algorithm only tackles the
Neyman-Pearson problem. They compared their classifier to that of Davenport et al. [9], which differs
in that it uses a hinge relaxation instead of the ramp loss, and found that with the ramp loss they achieved
similar or slightly better results with up to 10× less computation (our approach does not enjoy this
computational speedup).
Narasimhan et al. [19] considered optimizing the F-measure and other quantities that can be written
as concave functions of the TP and TN rates. Their proposed stochastic dual solver adaptively
linearizes concave functions of the rate functions (Equation 1). Joachims [16] indirectly optimizes
upper-bounds on functions of s_p(D⁺), s_p(D⁻), s_n(D⁺), s_n(D⁻) using a hinge loss approximation.
Finally, for some simple problems (particularly when there is only one constraint), the goals in
Section 1 can be coarsely handled by simple bias-shifting, i.e. first training an unconstrained classifier,
and then attempting to adjust the decision threshold to satisfy the constraints as a second step.
6
Figure 2: Blue dots: our proposal, with the classification functions' predictions being deterministically thresholded at zero. Red dots: same, but using the randomized classification rule described in
Section 2. Green dots: Zafar et al. [27]. Green line: unconstrained SVM. (Left) Test set error plotted
vs. observed test set fairness ratio s_p(D^M)/s_p(D^F). (Right) The 1/κ hyper-parameter used to
specify the desired fairness in the proposed method, and the observed fairness ratios of our classifiers
on the test data. All points are averaged over 100 runs.
4 Experiments
We evaluate the performance of the proposed approach in two experiments, the first using a benchmark
dataset for fairness, and the second on a real-world problem with churn and recall constraints.
4.1 Fairness
We compare training for fairness on the Adult dataset², the same dataset used by Zafar et al. [27].
The 32 561 training and 16 281 testing examples, derived from the 1994 Census, are 123-dimensional
and sparse. The features encode categorical attributes such as race, gender, education level and
relationship status. A positive class label means that the individual's income exceeds 50k. Let D^M
and D^F denote the sets of male and female examples. The number of positive labels in D^M is
roughly six times that of D^F. The goal is to train a classifier that respects the fairness constraint
s_p(D^M) ≤ s_p(D^F)/κ for a parameter κ ∈ (0, 1] (where κ = 0.8 corresponds to the 80% rule
mentioned in Section 1).

Our publicly-available Julia implementation³ for these experiments uses LIBLINEAR [11] with
the default parameters (most notably the regularization parameter λ = 1/n ≈ 3 × 10⁻⁵) to implement the SVMOptimizer
function, and does not include an unregularized bias b. The outer optimization over v does not use
the m-dimensional cutting plane algorithm of Algorithm 2, instead using a simpler one-dimensional
variant (observe that these experiments involve only one constraint). The majorization-minimization
procedure starts from the all-zeros vector (w^{(0)} in Algorithm 1).
We compare to the method of Zafar et al. [27], which proposed handling fairness with the constraint:
  ⟨w, x̄⟩ ≤ c,   where x̄ = (1/|D^M|) Σ_{x∈D^M} x − (1/|D^F|) Σ_{x∈D^F} x.   (8)
An SVM subject to this constraint (see Appendix D for details), for a range of c values, is our baseline.
Results in Figure 2 show the proposed method is much more accurate for any desired fairness, and
achieves fairness ratios not reachable with the approach of Zafar et al. [27] for any choice of c. It is
also easier to control: the values of c in Zafar et al. [27] do not have a clear interpretation, whereas κ
is an effective proxy for the fairness ratio.
4.2 Churn

Our second set of experiments demonstrates meeting real-world requirements on a proprietary
problem from Google: predicting whether a user interface element should be shown to a user, based

² "a9a" from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
³ https://github.com/gabgoh/svmc.jl
Figure 3: Blue: our proposal, with the classification functions' predictions being deterministically
thresholded at zero. Red: same, but using the randomized classification rule described in Section 2.
Green: unconstrained SVM trained on D₁ ∪ D₂, then thresholded (by shifting the bias b) to satisfy
the recall constraint on D₂. Dashed and dotted curves denote results on the testing and training
datasets, respectively. (Left) Observed churn (vertical axis) vs. the churn target used during training
(horizontal axis), on the unlabeled dataset D₃. (Right) Empirical error rates (vertical axis) vs. the
churn target, on the union D₁ ∪ D₂ of the two labeled datasets. All curves are averaged over 10 runs.
on a 31-dimensional vector of informative features, which is mapped to a roughly 30 000-dimensional
feature vector via a fixed kernel function φ. We train classifiers that are linear with respect to φ(x).
We are given the currently-deployed model, and seek to train a classifier that (i) has high accuracy,
(ii) has no worse recall than the deployed model, and (iii) has low churn w.r.t. the deployed model.

We are given three datasets, D₁, D₂ and D₃, consisting of 131 840, 53 877 and 68 892 examples,
respectively. The datasets D₁ and D₂ are hand-labeled, while D₃ is unlabeled. In addition, D₁ was
chosen via active sampling, while D₂ and D₃ are sampled i.i.d. from the underlying data distribution.
For all three datasets, we split out 80% for training and reserved 20% for testing. We address the three
goals in the proposed framework by simultaneously training the classifier to minimize the number of
errors on D₁ plus the number of false positives on D₂, subject to the constraints that the recall on
D₂ be at least as high as the deployed model's recall (we are essentially performing Neyman-Pearson
classification on D₂), and that the churn w.r.t. the deployed model on D₃ be no larger than a given
target parameter.
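In the notation of Problem 1 and Table 2, this setup can be encoded roughly as follows; this is our own sketch with placeholder sizes, where "D3+" and "D3-" denote the parts of the unlabeled set D₃ on which the deployed model predicts positive and negative.

# Encoding the churn experiment in the coefficients of Problem 1 (a sketch;
# sizes are placeholders, not the real dataset statistics).
sizes = {"D1+": 4.0e4, "D1-": 9.0e4, "D2+": 2.0e4, "D2-": 3.4e4,
         "D3+": 3.0e4, "D3-": 3.9e4}
deployed_recall, churn_target = 0.85, 0.04

objective = [("sn", "D1+", sizes["D1+"]),   # errors on D1: false negatives...
             ("sp", "D1-", sizes["D1-"]),   # ...plus false positives,
             ("sp", "D2-", sizes["D2-"])]   # plus false positives on D2
constraints = [
    # recall on D2 at least the deployed model's: s_n(D2+) <= 1 - recall
    ([("sn", "D2+", 1.0)], 1.0 - deployed_recall),
    # churn on D3: flips against the deployed model's predictions
    ([("sn", "D3+", sizes["D3+"]), ("sp", "D3-", sizes["D3-"])],
     churn_target * (sizes["D3+"] + sizes["D3-"])),
]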
These experiments use a proprietary C++ implementation of Algorithm 2, using the combined SDCA
and cutting plane approach of Appendix F to implement the inner optimizations over w and b, with
the CutChooser helper functions being as described in Appendices E.1 and F.2.1. We performed 5
iterations of the majorization-minimization procedure of Algorithm 1.
Our baseline is an unconstrained SVM that is thresholded after training to achieve the desired recall,
but makes no effort to minimize churn. We chose the regularization parameter λ using a power-of-10
grid search, found that 10⁻⁷ was best for this baseline, and then used λ = 10⁻⁷ for all experiments.
The plots in Figure 3 show the achieved churn and error rates on the training and testing sets for a
range of churn constraint values (red and blue curves), compared to the baseline thresholded SVM
(green lines). When using deterministic thresholding of the learned classifier (the blue curves, which
significantly outperformed randomized classification, the red curves), the proposed method achieves
lower churn and better accuracy for all targeted churn rates, while also meeting the recall constraint.
As expected, the empirical churn is extremely close to the targeted churn on the training set when
using randomized classification (red dotted curve, left plot), but less so on the 20% held-out test set
(red dashed curve). We hypothesize this disparity is due to overfitting, as the classifier has 30 000
parameters, and D3 is rather small (please see Appendix C for a discussion of the generalization
performance of our approach). However, except for the lowest targeted churn, the actual classifier
churn (blue dashed curves) is substantially lower than the targeted churn. Compared to the thresholded
SVM baseline, our approach significantly reduces churn without paying an accuracy cost.
References
[1] K. Ball. An elementary introduction to modern convex geometry. Flavors of Geometry, 31:1–58, 1997.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and
structural results. JMLR, 3:463–482, 2002.
[3] D. Biddle. Adverse Impact and Test Validation: A Practitioner's Guide to Valid and Defensible
Employment Testing. Gower, 2005.
[4] R. G. Bland, D. Goldfarb, and M. J. Todd. The ellipsoid method: A survey.
Operations Research, 29(6):1039–1091, November 1981.
[5] S. Boyd and L. Vandenberghe. Localization and cutting-plane methods, April 2011. Stanford
EE 364b lecture notes.
[6] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions
on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[7] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In ICML,
2006.
[8] A. Cotter, S. Shalev-Shwartz, and N. Srebro. Learning optimally sparse support vector machines.
In ICML, pages 266–274, 2013.
[9] M. Davenport, R. G. Baraniuk, and C. D. Scott. Tuning support vector machines for minimax
and Neyman-Pearson classification. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 2010.
[10] E. E. Eban, M. Schain, A. Gordon, R. A. Saurous, and G. Elidan. Large-scale learning with
global non-decomposable objectives, 2016. URL https://arxiv.org/abs/1608.04802.
[11] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for
large linear classification. JMLR, 9:1871–1874, 2008.
[12] M. M. Fard, Q. Cormier, K. Canini, and M. Gupta. Launch and iterate: Reducing prediction
churn. In NIPS, 2016.
[13] G. Gasso, A. Pappaionannou, M. Spivak, and L. Bottou. Batch and online learning algorithms
for nonconvex Neyman-Pearson classification. ACM Transactions on Intelligent Systems and
Technology, 2011.
[14] B. Grünbaum. Partitions of mass-distributions and convex bodies by hyperplanes. Pacific
Journal of Mathematics, 10(4):1257–1261, December 1960.
[15] M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In NIPS,
2016.
[16] T. Joachims. A support vector method for multivariate performance measures. In ICML, 2005.
[17] G. S. Mann and A. McCallum. Simple, robust, scalable semi-supervised learning with expectation regularization. In ICML, 2007.
[18] K. Miettinen. Nonlinear multiobjective optimization, volume 12. Springer Science & Business
Media, 2012.
[19] H. Narasimhan, P. Kar, and P. Jain. Optimizing non-decomposable performance measures: a
tale of two classes. In ICML, 2015.
[20] A. Nemirovski. Lecture notes: Efficient methods in convex programming. 1994. URL
http://www2.isye.gatech.edu/~nemirovs/Lect_EMCO.pdf.
[21] L. Rademacher. Approximating the centroid is hard. In SoCG, pages 302–305, 2007.
[22] R. T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:
21–42, 2000.
[23] C. D. Scott and R. D. Nowak. A Neyman-Pearson approach to statistical learning. IEEE
Transactions on Information Theory, 2005.
[24] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized
loss. JMLR, 14(1):567–599, Feb. 2013.
[25] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient
solver for SVM. Mathematical Programming, 127(1):3–30, March 2011.
[26] M. S. Vuolo and N. B. Levy. Disparate impact doctrine in fair housing. New York Law Journal,
2013.
[27] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness constraints: A mechanism
for fair classification. In ICML Workshop on Fairness, Accountability, and Transparency in
Machine Learning, 2015.
| 6316 |@word version:1 proportion:5 d2:9 seek:3 covariance:1 hsieh:1 sgd:1 liblinear:2 initial:4 series:1 score:1 contains:3 disparity:1 outperforms:1 current:2 com:3 must:3 written:1 fn:3 partition:1 informative:1 enables:1 hypothesize:1 plot:2 maxv:2 v:3 intelligence:1 selected:1 plane:7 mccallum:2 ith:1 math:1 hyperplanes:1 org:1 simpler:1 zhang:1 unbounded:1 mathematical:1 predecessor:1 become:1 incorrect:2 inside:2 manner:3 notably:1 indeed:1 amphitheatre:1 expected:6 themselves:1 roughly:2 multi:1 actual:1 considering:1 solver:3 spain:1 provided:1 notation:4 bounded:2 underlying:1 mass:1 medium:1 lowest:1 what:1 mountain:1 minimizes:1 substantially:1 narasimhan:2 finding:4 magnified:1 sinz:1 guarantee:1 every:7 concave:4 tackle:1 exactly:2 classifier:27 rm:3 demonstrates:1 control:3 partitioning:1 saurous:1 enjoy:1 positive:28 dropped:1 multiobjective:1 todd:1 approximately:5 might:3 plus:2 chose:1 accountability:1 studied:1 dynamically:1 challenging:1 nemirovski:1 range:3 statistically:1 averaged:2 testing:12 practice:2 union:1 implement:2 differs:2 procedure:4 sdca:1 empirical:4 implementation3:1 significantly:3 printed:1 fard:2 boyd:1 confidence:1 word:2 subpopulation:2 get:1 cannot:2 unlabeled:10 close:2 pegasos:1 put:1 risk:3 optimize:6 equivalent:4 imposed:1 customer:1 deterministic:3 maximizing:1 www:2 straightforward:1 starting:3 l:2 convex:25 survey:1 stabilizing:1 simplicity:1 decomposable:2 rule:5 insight:1 vandenberghe:1 stability:2 population:1 handle:1 coordinate:1 imagine:1 deploy:3 play:1 user:5 target:5 programming:2 us:2 element:1 satisfying:1 expensive:2 particularly:3 cut:3 predicts:1 labeled:8 v6t:1 role:1 subproblem:1 observed:3 csie:2 wang:1 worst:1 trade:1 mentioned:2 expecting:1 convexity:4 complexity:1 skeleton:2 inductively:1 employment:1 trained:1 tight:10 solving:1 upon:3 localization:1 differently:2 represented:1 various:1 regularizer:2 surrounding:1 train:3 jain:1 effective:1 equalized:1 labeling:2 hyper:1 choosing:2 pearson:6 outside:1 shalev:3 doctrine:1 heuristic:1 supplementary:1 larger:2 solve:1 say:1 ramp:15 otherwise:3 tightness:1 stanford:1 favor:1 transform:1 jointly:1 online:3 housing:2 sequence:2 advantage:1 propose:8 causing:2 loop:1 achieve:2 scalability:1 convergence:1 requirement:2 rademacher:2 leave:1 andrew:1 tale:1 b0:28 progress:1 paying:1 coverage:6 c:1 predicted:1 trading:2 launch:1 quantify:1 differ:1 safe:1 correct:3 discontinuous:2 attribute:1 stochastic:4 libsvmtools:1 material:1 mann:2 education:1 require:4 fix:1 generalization:1 ntu:2 elementary:1 correction:1 around:1 considered:3 ground:1 substituting:1 optimizer:1 early:1 achieves:2 purpose:2 outperformed:1 applicable:1 label:6 currently:2 create:1 successfully:1 trusted:1 cotter:3 minimization:4 offs:1 always:1 gaussian:1 rather:3 avoid:1 shelf:1 defensible:1 gatech:1 l0:3 derived:1 joachim:2 notational:1 nemirovs:1 a9a:1 contrast:1 centroid:1 baseline:6 sense:1 dependent:1 bt:13 classification:18 dual:4 ill:1 html:1 constrained:2 special:1 initialize:1 uc:1 equal:3 once:2 construct:2 field:1 sampling:1 broad:1 icml:6 fairness:22 minimized:1 intelligent:2 gordon:1 modern:1 simultaneously:1 individual:3 intended:1 consisting:1 geometry:2 ab:1 highly:1 adjust:1 generically:1 male:1 extreme:1 yielding:1 primal:3 egregious:5 held:3 accurate:3 closer:1 helper:1 nowak:1 minw:1 goh:1 plotted:1 desired:7 re:2 instance:1 industry:1 tp:5 disadvantage:1 measuring:1 cost:2 gr:1 too:4 optimally:1 combined:1 adaptively:1 thanks:1 randomized:8 off:2 michael:1 concrete:1 containing:1 
choose:1 possibly:1 davenport:2 worse:2 cuttingplane:1 return:4 coefficient:1 inc:1 matter:1 satisfy:4 rockafellar:1 race:1 depends:1 collobert:2 performed:1 view:1 cormier:1 red:7 start:1 option:1 majorization:4 minimize:6 contribution:1 publicly:1 accuracy:6 who:1 characteristic:1 efficiently:1 yield:5 identify:1 upperbounding:1 reserved:1 accurately:1 churn:30 classified:3 distort:1 evaluates:2 dm:5 di:6 degeneracy:1 sampled:3 stop:1 dataset:19 hardt:3 recall:13 ut:7 routine:1 originally:1 supervised:2 specify:1 improved:1 april:1 formulation:3 evaluated:1 shrink:1 just:1 hand:1 horizontal:1 nonlinear:1 reweighting:1 google:4 rodriguez:1 mayagupta:1 perhaps:1 verify:1 true:3 unbiased:2 multiplier:3 former:1 regularization:3 hence:1 equality:1 iteratively:1 goldfarb:1 ll:1 during:3 please:2 davis:2 suboptimality:1 criterion:1 pdf:1 demonstrate:1 julia:1 tn:2 interface:1 meaning:1 common:3 volume:1 jl:1 discussed:1 interpretation:1 kwk2:4 imposing:1 rd:4 unconstrained:5 grid:1 mathematics:2 z4:1 pm:3 tuning:1 dot:3 reachable:1 add:2 base:1 feb:1 multivariate:1 imbalanced:1 own:1 female:1 optimizing:8 optimizes:2 certain:4 nonconvex:1 kar:1 binary:2 meeting:2 additional:2 care:1 determine:2 maximize:2 elidan:1 dashed:3 semi:1 ii:4 multiple:4 u0:3 reduces:2 transparency:1 exceeds:1 match:2 compensate:1 lin:2 bland:1 equally:1 gummadi:1 ensuring:1 prediction:15 variant:1 impact:2 scalable:1 essentially:1 metric:6 df:3 expectation:1 arxiv:1 iteration:5 kernel:1 achieved:2 proposal:3 receive:1 want:1 whereas:2 addition:1 country:1 biased:2 ascent:1 subject:4 tend:1 db:2 biddle:1 december:1 effectiveness:1 linearizes:1 call:1 ee:1 structural:1 practitioner:2 near:1 odds:1 ideal:1 constraining:3 easy:3 iii:2 presence:1 split:1 iterate:1 fit:1 suboptimal:1 inner:2 idea:1 scalarization:3 whether:2 expression:1 handled:2 six:1 bartlett:1 ultimate:1 url:2 effort:1 penalty:1 york:1 cause:2 repeatedly:1 proprietary:2 gabriel:1 clear:1 involve:1 repeating:1 http:5 specifies:1 notice:4 dotted:2 estimated:1 correctly:2 blue:6 coarsely:1 group:3 key:3 threshold:2 drawn:1 d3:6 penalizing:1 libsvm:2 thresholded:7 ht:4 relaxation:1 run:2 parameterized:2 baraniuk:1 arrive:1 draw:1 decision:5 appendix:11 bound:25 maya:1 distinguish:1 datum:1 fan:1 nonnegative:1 www2:1 constraint:37 precisely:1 software:1 aspect:1 argument:1 min:4 extremely:1 attempting:1 performing:1 relatively:1 speedup:1 pacific:1 alternate:1 combination:5 ball:1 march:1 belonging:1 smaller:1 slightly:3 shallow:1 evolves:1 making:3 b:1 tw:2 census:1 socg:1 unregularized:1 neyman:6 equation:2 previously:1 discus:2 cjlin:2 mechanism:1 singer:1 demographic:1 adopted:1 available:2 operation:1 observe:1 differentiated:1 indirectly:1 alternative:1 batch:1 rp:4 original:1 remaining:1 include:2 ensure:2 opportunity:3 hinge:8 upperbounds:1 gower:1 especially:1 establish:1 approximating:1 objective:10 quantity:2 dependence:2 gradient:3 win:1 spivak:1 mapped:1 miettinen:1 outer:4 w0:23 evaluate:2 fresh:1 embarrassing:1 index:1 eban:2 relationship:1 ratio:5 minimizing:4 ellipsoid:1 difficult:1 potentially:1 subproblems:1 negative:14 disparate:1 design:1 implementation:3 perform:2 disagree:2 upper:17 observation:1 vertical:2 datasets:18 benchmark:2 finite:1 november:1 incorrectly:1 canini:1 discovered:1 rn:3 drift:1 pair:1 required:1 specified:4 optimized:1 catalog:1 learned:2 ucdavis:1 barcelona:1 subgroup:2 nip:3 address:2 adult:1 pattern:1 scott:2 fp:2 including:2 max:5 green:4 analogue:1 shifting:2 misclassification:1 power:1 difficulty:1 business:1 
regularized:2 predicting:1 indicator:1 valera:1 minimax:1 github:1 technology:2 library:2 axis:3 gasso:4 categorical:1 columbia:1 sn:14 deviate:1 friedlander:1 vancouver:1 determining:2 relative:2 law:2 loss:12 expect:1 lecture:2 srebro:3 validation:1 proxy:1 article:1 thresholding:1 summary:1 surprisingly:1 last:1 parity:1 majorizationminimization:1 infeasible:1 jth:1 bias:4 guide:1 taking:1 sparse:2 uzawa:1 boundary:2 calculated:1 default:1 world:10 evaluating:1 curve:8 valid:1 made:1 commonly:1 far:2 income:1 uryasev:1 transaction:4 compact:1 cutting:7 status:1 global:1 sequentially:1 active:1 overfitting:1 parkway:1 b1:1 xi:8 shwartz:3 freshly:1 continuous:1 search:1 table:6 learn:1 robust:1 ca:3 improving:1 investigated:1 bottou:2 zafar:8 constructing:2 da:2 vj:3 sp:23 pk:7 main:1 fair:3 body:1 augmented:1 deployed:12 precision:4 sub:1 theme:1 wish:3 deterministically:2 candidate:9 isye:1 unfair:1 jmlr:3 levy:1 hw:7 dozen:1 british:1 bad:2 dk:1 svm:12 gupta:2 mendelson:1 workshop:1 false:4 adding:1 magnitude:1 budget:1 easier:2 flavor:1 suited:1 entropy:1 lt:7 simply:2 saddle:2 lagrange:2 expressed:5 desire:1 chang:2 acquiring:1 gender:2 ubc:1 truth:1 nested:1 corresponds:1 acm:2 springer:1 weston:1 conditional:1 goal:14 targeted:4 quantifying:1 price:1 feasible:5 change:2 hard:2 loan:1 typical:2 except:1 reducing:2 adverse:1 secondary:1 hw0:2 people:1 support:5 latter:1 arises:1 dept:2 d1:7 handling:4 |
5,877 | 6,317 | "Congruent" and "Opposite" Neurons: Sisters for
Multisensory Integration and Segregation
Wen-Hao Zhang1,2,†, He Wang1, K. Y. Michael Wong1, Si Wu2
[email protected], [email protected], [email protected], [email protected]
1 Department of Physics, Hong Kong University of Science and Technology, Hong Kong.
2 State Key Lab of Cognitive Neuroscience and Learning, and IDG/McGovern Institute for Brain Research, Beijing Normal University, China.
Abstract
Experiments reveal that in the dorsal medial superior temporal (MSTd) and the
ventral intraparietal (VIP) areas, where visual and vestibular cues are integrated
to infer heading direction, there are two types of neurons with roughly the same
number. One is "congruent" cells, whose preferred heading directions are similar in
response to visual and vestibular cues; and the other is "opposite" cells, whose preferred heading directions are nearly "opposite" (with an offset of 180°) in response
to visual vs. vestibular cues. Congruent neurons are known to be responsible for
cue integration, but the computational role of opposite neurons remains largely
unknown. Here, we propose that opposite neurons may serve to encode the disparity information between cues necessary for multisensory segregation. We build
a computational model composed of two reciprocally coupled modules, MSTd
and VIP, and each module consists of groups of congruent and opposite neurons.
In the model, congruent neurons in two modules are reciprocally connected with
each other in the congruent manner, whereas opposite neurons are reciprocally
connected in the opposite manner. Mimicking the experimental protocol, our model
reproduces the characteristics of congruent and opposite neurons, and demonstrates
that in each module, the sisters of congruent and opposite neurons can jointly
achieve optimal multisensory information integration and segregation. This study
sheds light on our understanding of how the brain implements optimal multisensory
integration and segregation concurrently in a distributed manner.
1 Introduction
Our brain perceives the external world with multiple sensory modalities, including vision, audition,
olfaction, tactile, vestibular perception and so on. These sensory systems extract information
about the environment via different physical means, and they generate complementary cues (neural
representations) about external objects to the multisensory areas. Over the past years, a large volume
of experimental and theoretical studies have focused on investigating how the brain integrates multiple
sensory cues originated from the same object in order to perceive the object reliably in an ambiguous
environment, the so-called multisensory integration. They found that the brain can integrate multiple
cues optimally in a manner close to Bayesian inference, e.g., integrating visual and vestibular cues to
infer heading direction [1] and so on [2?4]. Neural circuit models underlying optimal multisensory
integration have been proposed, including a centralized model in which a dedicated processor receives
and integrates all sensory cues [5, 6], and a decentralized model in which multiple local processors
exchange cue information via reciprocal connections, so that optimal cue integration is achieved at
each local processor [7].
† Current address: Center for the Neural Basis of Cognition, Carnegie Mellon University.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: Congruent and opposite neurons in MSTd; similar results were found in VIP [12]. (A–B) Tuning curves (firing rate, spikes s⁻¹, vs. heading direction, °) of a congruent neuron (A) and an opposite neuron (B) in response to visual and vestibular cues: the preferred visual and vestibular directions are similar in (A) but nearly opposite, offset by 180°, in (B). (C) Histogram of neurons according to the difference between their preferred visual and vestibular directions; congruent and opposite neurons are comparable in number. (A–B) adapted from [1], (C) from [13].]
However, multisensory integration is only half of the story of multisensory information processing,
which works well when the sensory cues are originated from the same object. In cases where the
sensory cues originate from different objects, the brain should segregate, rather than integrate, the
cues. In a noisy environment, however, the brain is unable to differentiate the two situations at
first sight. The brain faces a "chicken vs. egg" dilemma in multisensory integration: without first
integrating multiple cues to eliminate uncertainty, the brain is unable to estimate the objects reliably to
differentiate whether the cues are from the same or different objects; but once the cues are integrated,
the disparity information between the cues is lost, and the brain can no longer discriminate objects
clearly when the cues actually come from different objects. To solve this dilemma, here we argue that
the brain needs to carry out multisensory integration and segregation concurrently in the early stage
of information processing, that is, a group of neurons integrates sensory cues while another group of
neurons extracts the cue disparity information, and the interplay between two networks determines
the final action: integration vs. segregation. Concurrent processing has the advantage of achieving
rapid object perception if the cues are indeed from the same object, and avoiding information loss if
the cues are from different objects. Psychophysical data tends to support this idea, which shows that
the brain can still sense the difference between cues in multisensory integration [8, 9].
What are the neural substrates of the brain to implement concurrent multisensory integration and
segregation? In the experiments of integrating visual and vestibular cues to infer heading direction, it
was found that in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area
(VIP) which primarily receive visual and vestibular cues respectively, there exist two types of neurons
displaying different cue integrative behaviors [1, 10]. One of them is called "congruent" cells, since
their preferred heading directions are similar in response to either a visual or a vestibular cue (Fig. 1A);
and the other type is called "opposite" cells, since their preferred visual and vestibular directions are
nearly "opposite" (with an offset of 180°, half of the period of direction, Fig. 1B). Data analyses and
modelling studies revealed that congruent neurons are responsible for cue integration [1, 10, 6, 7].
However, the computational role of opposite neurons remains largely unknown, despite the fact that
congruent and opposite neurons are comparably numerous in MSTd and VIP (Fig. 1C). Notably,
the responses of opposite neurons hardly vary when a single cue is replaced by two congruent cues
(i.e., no cue integration behavior), whereas their responses increase significantly when the disparity
between visual and vestibular cues increases [11], indicating that opposite neurons may serve to
extract the cue disparity information necessary for multisensory segregation. Motivated by the above
experimental findings, we explore how multisensory integration and segregation are concurrently
implemented in a neural system via sisters of congruent and opposite cells.
2 Probabilistic Model of Multisensory Information Processing
In reality, because of noise, the brain estimates stimulus information relying on ambiguous cues in a
probabilistic manner. Thus, we formulate multisensory information processing in the framework of
probabilistic inference. The present study mainly focuses on information processing at MSTd and
VIP, where visual and vestibular cues are integrated/segregated to infer heading direction. However,
the main results of this work are applicable to the processing of cues of other modalities.
2.1 The von Mises distribution for circular variables
Because heading direction is a circular variable whose values lie in the range $(-\pi, \pi]$, we adopt the von Mises distribution [14] (Supplementary Information Sec. 1). Compared with the Gaussian distribution, the von Mises distribution is more suitable and also more accurate for describing the probabilistic inference of circular variables, and furthermore, it gives a clear geometrical interpretation of multisensory information processing (see below).

Suppose there are two stimuli $s_1$ and $s_2$, each of which generates a sensory cue $x_m$, for $m = 1, 2$ (visual or vestibular), independently. We call $x_m$ the direct cue of $s_m$, and $x_l$ ($l \neq m$) the indirect cue to $s_m$. Denote by $p(x_m|s_m)$ the likelihood function, whose von Mises form is

$$p(x_m|s_m) = \frac{1}{2\pi I_0(\kappa_m)}\exp\left[\kappa_m\cos(x_m - s_m)\right] \equiv \mathcal{M}(x_m - s_m, \kappa_m), \qquad (1)$$

where $I_0(\kappa) = (2\pi)^{-1}\int_0^{2\pi} e^{\kappa\cos\theta}\,d\theta$ is the modified Bessel function of the first kind and order zero. $s_m$ is the mean of the von Mises distribution, i.e., the mean value of $x_m$. $\kappa_m$ is a positive number characterizing the concentration of the distribution, which is analogous to the inverse variance ($\sigma^{-2}$) of a Gaussian distribution. In the limit of large $\kappa_m$, a von Mises distribution $\mathcal{M}(x_m - s_m, \kappa_m)$ approaches the Gaussian distribution $\mathcal{N}(x_m - s_m, \kappa_m^{-1})$ (SI Sec. 1.2). For small $\kappa_m$, the von Mises distribution deviates from the Gaussian one (Fig. 2A).
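As a concrete check of Eq. (1) and of the Gaussian limit just mentioned, the following snippet (a minimal sketch of ours, not part of the paper; the function name is our own) evaluates the density on a grid, verifies its normalization, and shows the gap to the matched Gaussian shrinking as the concentration grows:

```python
import numpy as np
from scipy.special import i0   # modified Bessel function I_0 of Eq. (1)

def von_mises_pdf(x, mean, kappa):
    """Density M(x - mean, kappa) from Eq. (1), supported on (-pi, pi]."""
    return np.exp(kappa * np.cos(x - mean)) / (2 * np.pi * i0(kappa))

theta = np.linspace(-np.pi, np.pi, 20001)
dx = theta[1] - theta[0]
for kappa in (1.0, 3.0, 20.0):
    p = von_mises_pdf(theta, 0.0, kappa)
    gauss = np.sqrt(kappa / (2*np.pi)) * np.exp(-kappa * theta**2 / 2)  # N(0, 1/kappa)
    print(kappa, p.sum() * dx, np.max(np.abs(p - gauss)))
    # normalization stays ~1; the deviation from the Gaussian shrinks as kappa grows
```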
2.2 Multisensory integration
We first introduce a probabilistic model of Bayes-optimal multisensory integration. Experimental data revealed that our brain integrates sensory cues optimally in a manner close to Bayesian inference [2]. Assuming that noises in different channels are independent, the posterior distribution of the two stimuli can be written according to Bayes' theorem as

$$p(s_1, s_2|x_1, x_2) \propto p(x_1|s_1)\,p(x_2|s_2)\,p(s_1, s_2), \qquad (2)$$

where $p(s_1, s_2)$ is the prior over the stimuli, which specifies the concurrence probability of a stimulus pair. As an example in the present study, we choose the prior to be

$$p(s_1, s_2) = \frac{1}{2\pi}\mathcal{M}(s_1 - s_2, \kappa_s) = \frac{1}{(2\pi)^2 I_0(\kappa_s)}\exp\left[\kappa_s\cos(s_1 - s_2)\right]. \qquad (3)$$

This form of prior favors the tendency of the two stimuli to have similar values. Such a tendency has been modeled in multisensory integration [7, 15–17]. $\kappa_s$ determines the correlation between the two stimuli, i.e., how informative one cue is about the other, and it regulates the extent to which the two cues should be integrated. The fully integrated case, in which the prior becomes a delta function in the limit $\kappa_s \to \infty$, has been modeled in, e.g., [4, 5].

Since the results for the two stimuli are exchangeable, hereafter we only present the result for $s_1$, unless stated otherwise. Noting that $p(s_m) = p(x_m) = 1/2\pi$ are uniform distributions, the posterior distribution of $s_1$ given two cues becomes

$$p(s_1|x_1, x_2) \propto p(x_1|s_1)\int p(x_2|s_2)\,p(s_2|s_1)\,ds_2 \propto p(s_1|x_1)\,p(s_1|x_2). \qquad (4)$$

The indirect cue $x_2$ is informative about $s_1$ via the prior $p(s_1, s_2)$. Using Eqs. (1,3) and under reasonable approximations (SI Sec. 1.4), we obtain

$$p(s_1|x_2) \propto \int p(x_2|s_2)\,p(s_2|s_1)\,ds_2 \simeq \mathcal{M}(s_1 - x_2, \kappa_{12}), \qquad (5)$$

where $\kappa_{12}$ is determined by $A(\kappa_{12}) = A(\kappa_2)A(\kappa_s)$ with $A(\kappa) \equiv \int_{-\pi}^{\pi}\cos\theta\,e^{\kappa\cos\theta}\,d\theta \,\big/ \int_{-\pi}^{\pi} e^{\kappa\cos\theta}\,d\theta$.
Finally, utilizing Eqs. (1,5), Eq. (4) can be written as

$$p(s_1|x_1, x_2) \propto \mathcal{M}(s_1 - x_1, \kappa_1)\,\mathcal{M}(s_1 - x_2, \kappa_{12}) = \mathcal{M}(s_1 - \hat{s}_1, \hat{\kappa}_1), \qquad (6)$$

where the mean and concentration of the posterior given the two cues are (SI Sec. 1.3)

$$\hat{s}_1 = \mathrm{atan2}\left(\kappa_1\sin x_1 + \kappa_{12}\sin x_2,\;\kappa_1\cos x_1 + \kappa_{12}\cos x_2\right), \qquad (7)$$
$$\hat{\kappa}_1 = \left[\kappa_1^2 + \kappa_{12}^2 + 2\kappa_1\kappa_{12}\cos(x_1 - x_2)\right]^{1/2}, \qquad (8)$$

where atan2 is the arctangent function of two arguments (SI Eq. S17).
Eqs. (7,8) are the results of Bayesian integration in the form of a von Mises distribution, and they are the criteria by which we judge whether optimal cue integration is achieved in a neural system.

To understand these optimality criteria intuitively, it is helpful to see their Gaussian equivalents in the limit of large $\kappa_1$, $\kappa_2$ and $\kappa_s$. Under the condition $x_1 \approx x_2$, Eq. (8) is approximated by $\hat{\kappa}_1 \approx \kappa_1 + \kappa_{12}$ (SI Sec. 2). Since $\kappa \sim 1/\sigma^2$ when a von Mises distribution is approximated by a Gaussian one, Eq. (8) becomes $1/\hat{\sigma}_1^2 \approx 1/\sigma_1^2 + 1/\sigma_{12}^2$, which is the Bayesian prediction for the Gaussian variance conventionally used in the literature [4]. Similarly, Eq. (7) is associated with the Bayesian prediction for the Gaussian mean [4].
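The criteria (7,8) are easy to check numerically. The sketch below (our own illustration; the cue values and concentrations are hypothetical) fuses two cues by Eqs. (7,8) and verifies the underlying log-density identity $\kappa_1\cos(s-x_1) + \kappa_{12}\cos(s-x_2) = \hat{\kappa}_1\cos(s-\hat{s}_1)$ against a few test points:

```python
import numpy as np

def integrate_cues(x1, k1, x2, k12):
    """Fused mean and concentration of Eqs. (7,8)."""
    s_hat = np.arctan2(k1*np.sin(x1) + k12*np.sin(x2),
                       k1*np.cos(x1) + k12*np.cos(x2))
    k_hat = np.sqrt(k1**2 + k12**2 + 2*k1*k12*np.cos(x1 - x2))
    return s_hat, k_hat

x1, k1, x2, k12 = 0.3, 2.0, -0.9, 1.2          # hypothetical cue values
s_hat, k_hat = integrate_cues(x1, k1, x2, k12)
s = np.linspace(-np.pi, np.pi, 7)              # a few test points
lhs = k1*np.cos(s - x1) + k12*np.cos(s - x2)   # log of the product in Eq. (6), up to a constant
rhs = k_hat*np.cos(s - s_hat)
print(s_hat, k_hat, np.allclose(lhs, rhs))     # the identity holds exactly
```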
2.3 Multisensory segregation
We next introduce the probabilistic model of multisensory segregation. Inspired by the observation in multisensory integration that the posterior of a stimulus given combined cues is the product of the posteriors given each cue (Eq. 4), we propose that in multisensory segregation the disparity $D(s_1|x_1; s_1|x_2)$ between two cues is measured by the ratio of the posteriors given each cue, that is,

$$D(s_1|x_1; s_1|x_2) \propto p(s_1|x_1)/p(s_1|x_2). \qquad (9)$$

By taking the expectation of $\log D$ over the distribution $p(s_1|x_1)$, we get the Kullback-Leibler divergence between the two posteriors given each cue. This disparity measure was also used to discriminate alternative moving directions in [18].

Interestingly, by utilizing the property of von Mises distributions and the identity $\cos(s_1 + \pi - x_2) = -\cos(s_1 - x_2)$, Eq. (9) can be rewritten as

$$D(s_1|x_1; s_1|x_2) \propto p(s_1|x_1)\,p(s_1 + \pi|x_2), \qquad (10)$$

that is, the disparity information between two cues is proportional to the product of the posterior given the direct cue and the posterior given the indirect cue with the stimulus value shifted by $\pi$. Utilizing Eqs. (1,5), we obtain

$$D(s_1|x_1; s_1|x_2) \propto \mathcal{M}(s_1 - x_1, \kappa_1)\,\mathcal{M}(s_1 + \pi - x_2, \kappa_{12}) = \mathcal{M}(s_1 - \check{s}_1, \check{\kappa}_1), \qquad (11)$$

where the mean and concentration of this von Mises distribution are

$$\check{s}_1 = \mathrm{atan2}\left(\kappa_1\sin x_1 - \kappa_{12}\sin x_2,\;\kappa_1\cos x_1 - \kappa_{12}\cos x_2\right), \qquad (12)$$
$$\check{\kappa}_1 = \left[\kappa_1^2 + \kappa_{12}^2 - 2\kappa_1\kappa_{12}\cos(x_1 - x_2)\right]^{1/2}. \qquad (13)$$

The above equations are the criteria by which we judge whether the disparity information between two cues is optimally encoded in a neural system.
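Eqs. (12,13) differ from Eqs. (7,8) only in the sign of the indirect-cue terms; equivalently, segregation is integration applied to the indirect cue shifted by π. A quick numerical consistency check (our own sketch, same hypothetical values as above):

```python
import numpy as np

def segregate_cues(x1, k1, x2, k12):
    """Disparity mean and concentration of Eqs. (12,13)."""
    s_chk = np.arctan2(k1*np.sin(x1) - k12*np.sin(x2),
                       k1*np.cos(x1) - k12*np.cos(x2))
    k_chk = np.sqrt(k1**2 + k12**2 - 2*k1*k12*np.cos(x1 - x2))
    return s_chk, k_chk

x1, k1, x2, k12 = 0.3, 2.0, -0.9, 1.2           # hypothetical cue values
print(segregate_cues(x1, k1, x2, k12))
# equals integration (Eqs. 7,8) applied to the shifted indirect cue x2 + pi:
s2 = x2 + np.pi
print(np.arctan2(k1*np.sin(x1) + k12*np.sin(s2),
                 k1*np.cos(x1) + k12*np.cos(s2)),
      np.sqrt(k1**2 + k12**2 + 2*k1*k12*np.cos(x1 - s2)))
```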
3 Geometrical Interpretation of Multisensory Information Processing
A benefit of using the von Mises distribution is that it gives us a clear geometrical interpretation of multisensory information processing. A von Mises distribution $\mathcal{M}(s - x, \kappa)$ can be interpreted as a vector in a two-dimensional space, with its mean $x$ and concentration $\kappa$ representing respectively the angle and length of the vector (Fig. 2B-C). This fits well with the circular nature of heading direction. When the posterior of a stimulus is interpreted as a vector, the vector length represents the confidence of the inference. Interestingly, under this geometrical interpretation, the product of two von Mises distributions corresponds to the summation of their vectors, and the ratio of two von Mises distributions corresponds to the subtraction of the two vectors. Thus, from Eq. (4), multisensory integration is equivalent to vector summation, with each vector representing the posterior of the stimulus given a single cue, and from Eq. (9), multisensory segregation is equivalent to vector subtraction (see Fig. 2D).

Overall, multisensory integration and segregation transform the original two vectors, the posteriors given each cue, into two new vectors, the posterior given the combined cues and the disparity between the two cues. The original two vectors can be recovered from linear combinations of the new ones; hence there is no information loss. The geometrical interpretation also helps us to understand multisensory information processing intuitively. For instance, if the two vectors have a small intersection angle, i.e., the posteriors given each cue tend to support each other, then the summed vector is long, implying that the posterior of cue integration has strong confidence, and the difference vector is short, implying that the disparity between the two cues is small. If the two vectors have a large intersection angle, the interpretation is reversed.

[Figure 2: Geometrical interpretation of multisensory information processing with the von Mises distribution. (A) The difference between von Mises and Gaussian distributions; for large concentration κ, the difference becomes small. (B) A von Mises distribution in polar coordinates. (C) A von Mises distribution M(s − x, κ) represented as a vector in a 2D space, with angle given by x and length by κ. (D) Geometrical interpretation of multisensory integration and segregation: the posteriors of s1 given each cue are represented by two vectors (blue); inverting a posterior corresponds to rotating its vector by 180°; multisensory integration corresponds to the summation of the two vectors (green), and multisensory segregation to their subtraction (red).]
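The no-information-loss claim is immediate in the vector picture: if $v_1$ and $v_2$ are the two cue vectors, integration gives $v_1 + v_2$ and segregation gives $v_1 - v_2$, from which $v_1 = (\mathrm{sum} + \mathrm{diff})/2$ and $v_2 = (\mathrm{sum} - \mathrm{diff})/2$. A two-line check (our own sketch, hypothetical values):

```python
import numpy as np

def as_vector(x, kappa):
    """M(s - x, kappa) as a 2D vector: angle x, length kappa (Fig. 2C)."""
    return kappa * np.array([np.cos(x), np.sin(x)])

v1, v2 = as_vector(0.3, 2.0), as_vector(-0.9, 1.2)  # hypothetical cue posteriors
v_sum, v_diff = v1 + v2, v1 - v2                    # integration / segregation
print(np.allclose(v1, (v_sum + v_diff) / 2),        # True: cue 1 recovered
      np.allclose(v2, (v_sum - v_diff) / 2))        # True: cue 2 recovered
```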
4 Neural Implementation of Multisensory Information Processing
4.1 The model structure
We adopt a decentralized architecture to model multisensory information processing in the brain [7,
19]. Compared with the centralized architecture in which a dedicated processor carries out all
computations, the decentralized architecture considers a number of local processors communicating
with each other via reciprocal connections, so that optimal information processing is achieved at
each local processor distributively [7]. This architecture was supported by a number of experimental
findings, including the involvement of multiple brain areas, rather than a single one, in visual-vestibular
integration [1, 10], the existence of intensive reciprocal connections between MSTd and VIP [20, 21],
and the robustness of multisensory integration against the inactivation of a single module [22]. In a
previous work [7], Zhang et al. studied a decentralized model for multisensory integration at MSTd
and VIP, and demonstrated that optimal integration can be achieved at both areas simultaneously,
agreeing with the experimental data. In their model, MSTd and VIP are congruently connected, i.e.,
neurons in one module are strongly connected to those with similar preferred heading directions
in the other module. This congruent connection pattern naturally gives rise to congruent neurons.
Since the number of opposite neurons is comparable with that of congruent neurons in MSTd and VIP,
it is plausible that they also have a computational role. It is instructive to compare the probabilistic
models of multisensory integration and segregation, i.e., Eqs. (4) and (10). They have the same form,
except that in segregation the stimulus value in the posterior given the indirect cue is shifted by π.
Furthermore, since congruent reciprocal connections lead to congruent neurons, we hypothesize that
opposite neurons are due to opposite reciprocal connections, and their computational role is to encode
the disparity information between two cues. The decentralized model for concurrent multisensory
integration and segregation in MSTd and VIP is shown in Fig.3.
[Figure 3: The model structure. (A) The model is composed of two modules, representing MSTd and VIP respectively. Each module receives its direct cue via feedforward input. In each module there are two nets of excitatory neurons, each connected recurrently. Net c (blue) consists of congruent neurons; congruent neurons between modules are connected reciprocally in the congruent manner (blue lines). Net o (red) consists of opposite neurons; opposite neurons between modules are connected in the opposite manner (brown lines). Moreover, to implement competition between information integration and segregation, all neurons in a module are connected to a common inhibitory neuron pool (purple, shown only in module 1). (B) The recurrent, congruent, and opposite connection patterns W_r(θ,θ'), W_c(θ,θ'), W_o(θ,θ') between neurons. (C) The network's peak firing rate reflects its estimation reliability.]
4.2 The model dynamics
Denote by $u_{m,n}(\theta)$ and $r_{m,n}(\theta)$ respectively the synaptic input and firing rate of an $n$-type neuron in module $m$ whose preferred heading direction with respect to the direct cue $m$ is $\theta$. Here $n = c, o$ denotes the congruent and opposite cells respectively, and $m = 1, 2$ denotes MSTd and VIP respectively. For simplicity, we assume that the two modules are symmetric, and only present the dynamics of module 1.

The dynamics of a congruent neuron in module 1 is given by

$$\tau\frac{\partial u_{1,c}(\theta,t)}{\partial t} = -u_{1,c}(\theta,t) + \sum_{\theta'} W_r(\theta,\theta')\,r_{1,c}(\theta',t) + \sum_{\theta'} W_c(\theta,\theta')\,r_{2,c}(\theta',t) + I_{1,c}(\theta,t), \qquad (14)$$

where $I_{1,c}(\theta,t)$ is the feedforward input to the neuron and the sums run over all preferred directions $\theta' \in (-\pi,\pi]$. $W_r(\theta,\theta')$ is the recurrent connection between neurons in the same module, which is set to $W_r(\theta,\theta') = J_r(\sqrt{2\pi}a)^{-1}\exp\left[-(\theta-\theta')^2/(2a^2)\right]$ with periodic condition imposed, where $a$ controls the tuning width of the congruent neurons. $W_c(\theta,\theta')$ is the reciprocal connection between congruent cells in the two modules, which is set to $W_c(\theta,\theta') = J_c(\sqrt{2\pi}a)^{-1}\exp\left[-(\theta-\theta')^2/(2a^2)\right]$. The reciprocal connection strength $J_c$ controls the extent to which cues are integrated between modules and is associated with the correlation parameter $\kappa_s$ in the stimulus prior (see SI Sec. 3.3).

The dynamics of an opposite neuron in module 1 is given by

$$\tau\frac{\partial u_{1,o}(\theta,t)}{\partial t} = -u_{1,o}(\theta,t) + \sum_{\theta'} W_r(\theta,\theta')\,r_{1,o}(\theta',t) + \sum_{\theta'} W_o(\theta,\theta')\,r_{2,o}(\theta',t) + I_{1,o}(\theta,t). \qquad (15)$$

It has the same form as that of a congruent neuron, except that the pattern of reciprocal connections is given by $W_o(\theta,\theta') = J_c(\sqrt{2\pi}a)^{-1}\exp\left[-(\theta+\pi-\theta')^2/(2a^2)\right] = W_c(\theta+\pi,\theta')$; that is, opposite neurons between modules are oppositely connected, with an offset of $\pi$. We choose the strength and width of the connection pattern $W_o$ to be the same as those of $W_c$. This is based on the finding that the tuning functions of congruent and opposite neurons have similar tuning widths and strengths [12]. Note that all connections are imposed with periodic conditions.

In the model, we include the effect of inhibitory neurons through a divisive normalization of the responses of the excitatory neurons [23], given by

$$r_{1,n}(\theta,t) = \frac{1}{D_u}\left[u_{1,n}(\theta,t)\right]_+^2, \qquad (16)$$
where $D_u \equiv 1 + \omega\sum_{n'=c,o}\sum_{\theta'}\left[u_{1,n'}(\theta',t)\right]_+^2$, $[x]_+ \equiv \max(x, 0)$, and the parameter $\omega$ controls the magnitude of the divisive normalization.

[Figure 4: Bayes-optimal multisensory integration and segregation with congruent and opposite neurons. (A-B) Tuning curves of an example congruent neuron and an example opposite neuron in module 1: the preferred directions of the congruent neuron in response to the two single cues are the same, at −90°, while the preferred directions of the opposite neuron under the two single cues are opposite, differing by 180°. (C-E) The neuronal population activities at module 1 under three cueing conditions: only the direct cue 1, at 0° (C), only the indirect cue 2, at −60° (D), and the combination of the two cues (E). (F) The activity levels of the congruent and opposite neuronal networks (measured by the corresponding bump heights) vs. the cue disparity x2 − x1. (G-H) Comparison of the mean and concentration of the stimulus posterior given two cues, as estimated by the congruent neuronal network, with the Bayesian predictions of Eqs. (7,8); each dot is a result obtained under one parameter set. (I-J) Comparison of the mean and concentration of the cue disparity, as estimated by the opposite neuronal network, with the predictions of Eqs. (12,13). Parameters: J_r = 0.4J̄, J_c = J_o ∈ [0.1, 0.5]J_r, α_1 = α_2 ∈ [0.8, 1.6]U_m^0, I_b = 1, F = 0.5. (G-J) x_1 = 0°, x_2 ∈ [0°, 180°].]
The feedforward input conveys the direct cue information to a module (e.g., the feedforward input to MSTd comes from area MT, which extracts the heading direction from optic flow), and is set to be

$$I_{1,n}(\theta,t) = \alpha_1\exp\left[-\frac{(\theta-x_1)^2}{4a^2}\right] + \sqrt{F\alpha_1\exp\left[-\frac{(\theta-x_1)^2}{8a^2}\right]}\,\xi_1(\theta,t) + I_b + \sqrt{FI_b}\,\epsilon_{1,n}(\theta,t), \qquad (17)$$

where $\alpha_1$ is the signal strength, $I_b$ the mean of the background input, and $F$ the Fano factor. $\xi_1(\theta,t)$ and $\epsilon_{1,n}(\theta,t)$ are Gaussian white noises of zero mean, with variances satisfying $\langle\xi_m(\theta,t)\xi_{m'}(\theta',t')\rangle = \delta_{mm'}\delta(\theta-\theta')\delta(t-t')$ and $\langle\epsilon_{m,n}(\theta,t)\epsilon_{m',n'}(\theta',t')\rangle = \delta_{mm'}\delta_{nn'}\delta(\theta-\theta')\delta(t-t')$. The signal-associated noises $\xi_1(\theta,t)$ to congruent and opposite neurons are exactly the same, while the background noises $\epsilon_{1,n}(\theta,t)$ to congruent and opposite neurons are independent of each other. At the steady state, the signal drives the network state to center at the cue value $x_1$, whereas the noises induce fluctuations of the network state. Since we consider multiplicative noise with a constant Fano factor, the signal strength $\alpha_m$ controls the reliability of cue $m$ [5]. The exact form of the feedforward input is not crucial, as long as it has a uni-modal shape.
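To make Eqs. (14)-(17) concrete, below is a minimal simulation sketch of module 1 (our own illustration, not the authors' code). All parameter values are hypothetical, the noise terms are omitted, and the reciprocal input from module 2 is replaced by a fixed bump at the indirect cue, mimicking the combined-cue condition of Fig. 4E:

```python
import numpy as np

N = 180                                        # neurons per net
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
a, tau, dt = 0.5, 1.0, 0.05                    # tuning width, time constant, step
Jr, Jc, omega = 0.5, 0.3, 3e-4                 # hypothetical connection strengths

def kernel(J, shift=0.0):
    """Gaussian connection profile with periodic wrapping, cf. W_r, W_c, W_o."""
    d = (theta[:, None] - theta[None, :] - shift + np.pi) % (2*np.pi) - np.pi
    return J * np.exp(-d**2 / (2*a**2)) / (np.sqrt(2*np.pi)*a) * (2*np.pi/N)

Wr, Wc, Wo = kernel(Jr), kernel(Jc), kernel(Jc, shift=np.pi)  # Wo = Wc(theta+pi, .)

x1, x2, alpha = 0.0, 1.0, 1.0                  # direct and indirect cue values
ff = alpha * np.exp(-(theta - x1)**2 / (4*a**2))   # mean feedforward, Eq. (17)
r2 = alpha * np.exp(-(theta - x2)**2 / (4*a**2))   # stand-in for module-2 rates

u_c, u_o = np.zeros(N), np.zeros(N)
for _ in range(2000):                          # Euler integration of Eqs. (14,15)
    q_c, q_o = np.maximum(u_c, 0)**2, np.maximum(u_o, 0)**2
    Du = 1 + omega * (q_c.sum() + q_o.sum())   # shared divisive pool, Eq. (16)
    r_c, r_o = q_c / Du, q_o / Du
    u_c += dt/tau * (-u_c + Wr @ r_c + Wc @ r2 + ff)
    u_o += dt/tau * (-u_o + Wr @ r_o + Wo @ r2 + ff)

# the congruent bump is pulled toward the indirect cue; the opposite bump is
# pulled away from it (toward x2 + pi), mirroring Fig. 4E
print(theta[np.argmax(u_c)], theta[np.argmax(u_o)])
```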
4.3 Results
We first verify that our model reproduces the characteristics of congruent and opposite neurons.
Figs. 4A&B show the tuning curves of a congruent and an opposite neuron with respect to either
visual or vestibular cues, which demonstrate that neurons in our model indeed exhibit the congruent
or opposite direction selectivity similar to Fig. 1.
We then investigate the mean population activities of our model under different cuing conditions.
When only cue x1 is applied to module 1, both the congruent and opposite neuronal networks in module 1 receive the feedforward input and generate bumps at x1 (Fig. 4C). When only cue x2 is applied to module 2, the congruent neuronal network in module 1 receives a reciprocal input and generates a bump at x2, whereas the opposite neuronal network receives an offset reciprocal input and generates a bump at x2 + π (Fig. 4D). For the indirect cue x2, the neural activity it induces in module 1 is lower than that induced by the direct cue x1 (Fig. 4C). When both cues are presented, the congruent neuronal network integrates the feedforward and reciprocal inputs, whereas the opposite neuronal network computes their disparity by integrating the feedforward inputs with the offset reciprocal inputs shifted by π (Fig. 4E). The two networks compete with each other via divisive normalization. Fig. 4F shows that when the disparity between cues is small, the activity of congruent neurons is higher than that of opposite neurons. As the cue disparity increases, the activity of the congruent neuronal network decreases, whereas the activity of the opposite neurons increases. These complementary changes in the activities of congruent and opposite neurons provide a clue for other parts of the brain to evaluate whether the cues come from the same or different objects [24].
Finally, to verify whether Bayes-optimal multisensory information processing is achieved in our model, we check the validity of Eqs. (7-8) for multisensory integration $p(s_m|x_1,x_2)$ by congruent neurons in module $m$, and of Eqs. (12-13) for multisensory segregation $D(s_m|x_m; s_m|x_l)$ ($l \neq m$) by opposite neurons in module $m$. Take the verification of the congruent neuronal network in module $m$ as an example. When a pair of cues is simultaneously applied, the actual mean and concentration of the network's estimate (the bump position) are measured through the population vector [25] (SI Sec. 4.2). To obtain the Bayesian predictions for the network's estimate under the combined-cue condition (details in SI Sec. 4.3), the mean and concentration of that network's estimates under either single-cue condition are also measured, and then substituted into Eqs. (7-8). Comparisons between the measured means and concentrations of the congruent networks in the two modules and the corresponding theoretical predictions are shown in Fig. 4G&H, indicating an excellent fit, where each dot is the result under a particular set of parameters. Similarly, comparisons between the measured means and concentrations of the opposite networks and the theoretical predictions (SI Sec. 4.3) are shown in Fig. 4I&J, indicating that opposite neurons indeed implement multisensory segregation.
5 Conclusion and Discussion
Over the past years, multisensory integration has received large attention in modelling studies, but
the equally important issue of multisensory segregation has been rarely explored. The present study
proposes that opposite neurons, which are widely observed in MSTd and VIP, encode the disparity
information between sensory cues. We built a computational model composed of reciprocally
coupled MSTd and VIP, and demonstrated that the characteristics of congruent and opposite cells
naturally emerge from the congruent and opposite connection patterns between modules, respectively.
Using the von Mises distribution, we derived the optimal criteria for integration and segregation
of circular variables and found they have clear geometrical meanings: integration corresponds to
vector summation while segregation corresponds to vector subtraction. We further showed that such a
decentralized system can realize optimal cue integration and segregation at each module distributively.
To our best knowledge, this work is the first modelling study unveiling the functional role of opposite
cells. It has a far-reaching implication for multisensory information processing, namely, that the brain
can exploit sisters of congruent and opposite neurons to implement cue integration and segregation
concurrently.
For simplicity, only perfectly congruent or perfectly opposite neurons are considered, but in reality, there are some portions of neurons whose differences between preferred visual and vestibular heading directions lie between 0° and 180° (Fig. 1C). We checked that such neurons can arise from adding noise to the reciprocal connections. As long as the distribution in Fig. 1C is peaked at 0° and 180°, the model can implement concurrent integration and segregation. Also, we have only pointed out
the model can implement concurrent integration and segregation. Also, we have only pointed out
that the competition between congruent and opposite neurons provides a clue for the brain to judge
whether the cues are likely to originate from the same or different objects, without exploring how the
brain actually does this. These issues will be investigated in our future work.
Acknowledgments
This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12 and 605813) and
National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China
(31261160495).
References
[1] Yong Gu, Dora E Angelaki, and Gregory C DeAngelis. Neural correlates of multisensory cue integration in macaque MSTd. Nature Neuroscience, 11(10):1201–1210, 2008.
[2] Marc O Ernst and Heinrich H Bülthoff. Merging the senses into a robust percept. Trends in Cognitive Sciences, 8(4):162–169, 2004.
[3] David Alais and David Burr. The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3):257–262, 2004.
[4] Marc O Ernst and Martin S Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429–433, 2002.
[5] Wei Ji Ma, Jeffrey M Beck, Peter E Latham, and Alexandre Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11):1432–1438, 2006.
[6] Tomokazu Ohshiro, Dora E Angelaki, and Gregory C DeAngelis. A normalization model of multisensory integration. Nature Neuroscience, 14(6):775–782, 2011.
[7] Wen-Hao Zhang, Aihua Chen, Malte J Rasch, and Si Wu. Decentralized multisensory information integration in neural systems. The Journal of Neuroscience, 36(2):532–547, 2016.
[8] Mark T Wallace, GE Roberson, W David Hairston, Barry E Stein, J William Vaughan, and Jim A Schirillo. Unifying multisensory signals across time and space. Experimental Brain Research, 158(2):252–258, 2004.
[9] Ahna R Girshick and Martin S Banks. Probabilistic combination of slant information: weighted averaging and robustness as optimal percepts. Journal of Vision, 9(9):8, 2009.
[10] Aihua Chen, Gregory C DeAngelis, and Dora E Angelaki. Functional specializations of the ventral intraparietal area for multisensory heading discrimination. The Journal of Neuroscience, 33(8):3567–3581, 2013.
[11] Michael L Morgan, Gregory C DeAngelis, and Dora E Angelaki. Multisensory integration in macaque visual cortex depends on cue reliability. Neuron, 59(4):662–673, 2008.
[12] Aihua Chen, Gregory C DeAngelis, and Dora E Angelaki. Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex. The Journal of Neuroscience, 31(33):12036–12052, 2011.
[13] Yong Gu, Paul V Watkins, Dora E Angelaki, and Gregory C DeAngelis. Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area. The Journal of Neuroscience, 26(1):73–85, 2006.
[14] Richard F Murray and Yaniv Morgenstern. Cue combination on the circle and the sphere. Journal of Vision, 10(11):15, 2010.
[15] Jean-Pierre Bresciani, Franziska Dammeier, and Marc O Ernst. Vision and touch are automatically integrated for the perception of sequences of events. Journal of Vision, 6(5):2, 2006.
[16] Neil W Roach, James Heron, and Paul V McGraw. Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration. Proceedings of the Royal Society of London B: Biological Sciences, 273(1598):2159–2168, 2006.
[17] Yoshiyuki Sato, Taro Toyoizumi, and Kazuyuki Aihara. Bayesian inference explains perception of unity and ventriloquism aftereffect: identification of common sources of audiovisual stimuli. Neural Computation, 19(12):3335–3355, 2007.
[18] Mehrdad Jazayeri and J Anthony Movshon. Optimal representation of sensory information by neural populations. Nature Neuroscience, 9(5):690–696, 2006.
[19] Wen-Hao Zhang and Si Wu. Reciprocally coupled local estimators implement Bayesian information integration distributively. In Advances in Neural Information Processing Systems, pages 19–27, 2013.
[20] Driss Boussaoud, Leslie G Ungerleider, and Robert Desimone. Pathways for motion analysis: cortical connections of the medial superior temporal and fundus of the superior temporal visual areas in the macaque. Journal of Comparative Neurology, 296(3):462–495, 1990.
[21] Joan S Baizer, Leslie G Ungerleider, and Robert Desimone. Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques. The Journal of Neuroscience, 11(1):168–190, 1991.
[22] Yong Gu, Gregory C DeAngelis, and Dora E Angelaki. Causal links between dorsal medial superior temporal area neurons and multisensory heading perception. The Journal of Neuroscience, 32(7):2299–2313, 2012.
[23] Matteo Carandini and David J Heeger. Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1):51–62, 2012.
[24] Tatiana A Engel and Xiao-Jing Wang. Same or different? A neural circuit mechanism of similarity-based pattern match decision making. The Journal of Neuroscience, 31(19):6982–6996, 2011.
[25] Apostolos P Georgopoulos, Andrew B Schwartz, and Ronald E Kettner. Neuronal population coding of movement direction. Science, 233(4771):1416–1419, 1986.
5,878 | 6,318 | Synthesis of MCMC and Belief Propagation
Sungsoo Ahn*, Michael Chertkov†, Jinwoo Shin*
* School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
† Theoretical Division, T-4 & Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA,
and Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
{sungsoo.ahn, jinwoos}@kaist.ac.kr, chertkov@lanl.gov
Abstract
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most
popular algorithms for computational inference in Graphical Models (GM). In
principle, MCMC is an exact probabilistic method which, however, often suffers
from exponentially slow mixing. In contrast, BP is a deterministic method, which is
typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting
the approximation error of BP, i.e., we provide a way to compensate for BP errors
via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus
approach which allows to express the BP error as a sum of weighted generalized
loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial-time for
planar pair-wise binary GMs and it also provides a highly accurate approximation
empirically. Motivated by this, we first propose a polynomial-time approximation
MCMC scheme for the truncated series of general (non-planar) pair-wise binary
models. Our main idea here is to use the Worm algorithm, known to provide fast
mixing in other (related) problems, and then design an appropriate rejection scheme
to sample 2-regular loops. Furthermore, we also design an efficient rejection-free
MCMC scheme for approximating the full series. The main novelty underlying
our design is in utilizing the concept of cycle basis, which provides an efficient
decomposition of the generalized loops. In essence, the proposed MCMC schemes
run on transformed GM built upon the non-trivial BP solution, and our experiments
show that this synthesis of BP and MCMC outperforms both direct MCMC and
bare BP schemes.
1 Introduction
GMs express factorization of the joint multivariate probability distributions in statistics via graph of
relations between variables. The concept of GM has been used successfully in information theory,
physics, artificial intelligence and machine learning [1, 2, 3, 4, 5, 6]. Of the many inference problems
one can pose with a GM, computing the partition function (normalization), or equivalently marginalizing
the joint distribution, is the most general problem of interest. However, this paradigmatic inference
problem is known to be computationally intractable in general, i.e., formally it is #P-hard even to
approximate [7, 8].
To address this obstacle, extensive efforts have been made to develop practical approximation methods,
among which MCMC- [9] based and BP- [10] based algorithms are, arguably, the most popular
and practically successful ones. MCMC is exact, i.e., it converges to the correct answer, but its
convergence/mixing is, in general, exponential in the system size. On the other hand, message
passing implementations of BP typically demonstrate fast convergence, however in general lacking
approximation guarantees for GM containing loops. Motivated by this complementarity of the
MCMC and BP approaches, we aim here to synthesize a hybrid approach benefiting from a joint use
of MCMC and BP.
At a high level, our proposed scheme uses BP as the first step and then runs MCMC to correct for the
approximation error of BP. To design such an "error-correcting" MCMC, we utilize the Loop Calculus
approach [11] which allows, in a nutshell, to express the BP error as a sum (i.e., series) of weights
of the so-called generalized loops (sub-graphs of a special structure). There are several challenges
one needs to overcome. First of all, to design an efficient Markov Chain (MC) sampler, one needs to
design a scheme which allows efficient transitions between the generalized loops. Second, even if
one designs such a MC which is capable of accessing all the generalized loops, it may mix slowly.
Finally, weights of generalized loops can be positive or negative, while an individual MCMC can
only generate non-negative contributions.
Since approximating the full loop series (LS) is intractable in general, we first explore whether we
can deal with the challenges at least in the case of the truncated LS corresponding to 2-regular loops.
In fact, this problem has been analyzed in the case of the planar pairwise binary GMs [12, 13] where
it was shown that the 2-regular LS is computable exactly in polynomial-time through a reduction
to a Pfaffian (or determinant) computation [14]. In particular, the partition function of the Ising
model without external field (i.e., where only pair-wise factors present) is computable exactly via
the 2-regular LS. Furthermore, the authors show that in the case of general planar pairwise binary
GMs, the 2-regular LS provides a highly accurate approximation empirically. Motivated by these
results, we address the same question in the general (i.e., non-planar) case of pairwise binary GMs
via MCMC. For the choice of MC, we adopt the Worm algorithm [15]. We prove that with some
modification including rejections, the algorithm allows to sample (with probabilities proportional to
respective weights) 2-regular loops in polynomial-time. Then, we design a novel simulated annealing
strategy using the sampler to estimate separately positive and negative parts of the 2-regular LS.
Given any ε > 0, this leads to an ε-approximation polynomial-time scheme for the 2-regular LS under
a mild assumption.
We next turn to estimating the full LS. In this part, we ignore the theoretical question of establishing
the polynomial mixing time of a MC, and instead focus on designing an empirically efficient MCMC
scheme. We design an MC using a cycle basis of the graph [16] to sample generalized loops directly,
without rejections. It transits from one generalized loop to another by adding or deleting a random
element of the cycle basis. Using the MC sampler, we design a simulated annealing strategy for
estimating the full LS, which is similar to what was used earlier to estimate the 2-regular LS. Notice
that even though the prime focus of this paper is on pairwise binary GMs, the proposed MCMC
scheme allows straightforward generalization to general non-binary GMs.
In summary, we propose novel MCMC schemes to estimate the LS correction to the BP contribution
to the partition function. Since already the bare BP provides a highly non-trivial estimation for the
partition function, it is naturally expected and confirmed in our experimental results that the proposed
algorithm outperforms other standard (not related to BP) MCMC schemes applied to the original
GM. We believe that our approach provides a new angle for approximate inference on GM and is of
broader interest to various applications involving GMs.
2 Preliminaries
2.1 Graphical models and belief propagation
Given an undirected graph G = (V, E) with |V| = n, |E| = m, a pairwise binary Markov Random
Field (MRF) defines the following joint probability distribution on x = [x_v ∈ {0,1} : v ∈ V]:
$$p(x) = \frac{1}{Z}\prod_{v\in V}\psi_v(x_v)\prod_{(u,v)\in E}\psi_{u,v}(x_u, x_v),\qquad
Z := \sum_{x\in\{0,1\}^n}\,\prod_{v\in V}\psi_v(x_v)\prod_{(u,v)\in E}\psi_{u,v}(x_u, x_v),$$
where ψ_v, ψ_{u,v} are some non-negative functions, called compatibility or factor functions, and the
normalization constant Z is called the partition function. Without loss of generality, we assume G
is connected. It is known that approximating the partition function is #P-hard in general [8]. Belief
Propagation (BP) is a popular message-passing heuristic for approximating marginal distributions of
MRF. The BP algorithm iterates the following message updates for all (u, v) ∈ E:
$$m^{t+1}_{u\to v}(x_v) \;\propto\; \sum_{x_u\in\{0,1\}} \psi_{u,v}(x_u, x_v)\,\psi_u(x_u)\prod_{w\in N(u)\setminus v} m^{t}_{w\to u}(x_u),$$
where N(v) denotes the set of neighbors of v. In general BP may fail to converge; in this
case one may substitute it with a somewhat more involved algorithm provably convergent to its fixed
point [22, 23, 24]. Estimates for the marginal probabilities are expressed via the fixed-point messages
{m_{u→v} : (u, v) ∈ E} as follows: τ_v(x_v) ∝ ψ_v(x_v) ∏_{u∈N(v)} m_{u→v}(x_v) and
$$\tau_{u,v}(x_u, x_v) \;\propto\; \psi_u(x_u)\,\psi_v(x_v)\,\psi_{u,v}(x_u, x_v)
\Bigg(\prod_{w\in N(u)\setminus v} m_{w\to u}(x_u)\Bigg)\Bigg(\prod_{w\in N(v)\setminus u} m_{w\to v}(x_v)\Bigg).$$
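As an illustrative aid, the message update above can be implemented in a few lines. The following sketch is ours, not the authors' code; the function name and the dictionary-based data layout are assumptions.

```python
import numpy as np

def belief_propagation(psi_v, psi_e, edges, n, T=50):
    """Sum-product BP on a pairwise binary MRF (illustrative sketch).
    psi_v: dict v -> array (2,); psi_e: dict (u, v) -> array (2, 2); edges: list of (u, v)."""
    nbrs = {v: [] for v in range(n)}
    for (u, v) in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    # one message per directed edge, initialized uniformly
    m = {(u, v): np.ones(2) / 2 for (a, b) in edges for (u, v) in ((a, b), (b, a))}
    for _ in range(T):
        new_m = {}
        for (u, v) in m:
            pe = psi_e[(u, v)] if (u, v) in psi_e else psi_e[(v, u)].T
            # psi_u(x_u) times the product over w in N(u)\v of m_{w->u}(x_u)
            prod = psi_v[u].copy()
            for w in nbrs[u]:
                if w != v:
                    prod = prod * m[(w, u)]
            msg = pe.T @ prod          # sum over x_u of psi_{u,v}(x_u, x_v) * prod(x_u)
            new_m[(u, v)] = msg / msg.sum()
        m = new_m
    return m
```

Node marginals then follow as τ_v(x_v) ∝ ψ_v(x_v) ∏_{u∈N(v)} m[(u, v)][x_v].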
2.2 Bethe approximation and loop calculus
BP marginals also result in the following Bethe approximation for the partition function Z:
$$\log Z_{\mathrm{Bethe}} = \sum_{v\in V}\sum_{x_v}\tau_v(x_v)\log\psi_v(x_v)
+ \sum_{(u,v)\in E}\sum_{x_u, x_v}\tau_{u,v}(x_u, x_v)\log\psi_{u,v}(x_u, x_v)$$
$$\qquad - \sum_{v\in V}\sum_{x_v}\tau_v(x_v)\log\tau_v(x_v)
- \sum_{(u,v)\in E}\sum_{x_u, x_v}\tau_{u,v}(x_u, x_v)\log\frac{\tau_{u,v}(x_u, x_v)}{\tau_u(x_u)\,\tau_v(x_v)}.$$
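A direct transcription of this formula into code reads as follows; this is our own sketch (the data layout mirrors the BP snippet above), not code from the paper.

```python
import numpy as np

def log_Z_bethe(tau_v, tau_e, psi_v, psi_e, edges):
    """Bethe estimate of log Z from BP (pseudo-)marginals -- illustrative sketch.
    tau_v[v], psi_v[v]: arrays (2,); tau_e[(u, v)], psi_e[(u, v)]: arrays (2, 2)."""
    val = 0.0
    for v in tau_v:
        val += np.sum(tau_v[v] * np.log(psi_v[v]))   # node energy term
        val -= np.sum(tau_v[v] * np.log(tau_v[v]))   # node entropy term
    for (u, v) in edges:
        te = tau_e[(u, v)]
        val += np.sum(te * np.log(psi_e[(u, v)]))    # edge energy term
        val -= np.sum(te * np.log(te / np.outer(tau_v[u], tau_v[v])))  # mutual-information term
    return val
```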
If graph G is a tree, the Bethe approximation is exact, i.e., Z_Bethe = Z. However, in general, i.e., for
graphs with cycles, the BP algorithm often provides a rather accurate, but still approximate, answer.
The Loop Series (LS) [11] expresses Z/Z_Bethe as the following sum/series:
$$\frac{Z}{Z_{\mathrm{Bethe}}} = Z_{\mathrm{Loop}} := \sum_{F\in\mathcal{L}} w(F),\qquad w(\emptyset) = 1,$$
$$w(F) := \prod_{(u,v)\in E_F}\left(\frac{\tau_{u,v}(1,1)}{\tau_u(1)\,\tau_v(1)} - 1\right)
\prod_{v\in V_F}\left(\tau_v(1) + (-1)^{d_F(v)}\left(\frac{\tau_v(1)}{1-\tau_v(1)}\right)^{d_F(v)-1}\tau_v(1)\right),$$
where each term/weight is associated with the so-called generalized loop F, and 𝓛 denotes the set of
all generalized loops in graph G (including the empty subgraph ∅). Here, a subgraph F of G is called
a generalized loop if all vertices v ∈ F have degree d_F(v) (in the subgraph) no smaller than 2.
Since the number of generalized loops is exponentially large, computing Z_Loop is intractable in
general. However, the following truncated sum of Z_Loop, called the 2-regular loop series, is known to be
computable in polynomial time if G is planar [12]:¹
$$Z_{\text{2-Loop}} := \sum_{F\in\mathcal{L}_{\text{2-Loop}}} w(F),$$
where 𝓛_2-Loop denotes the set of all 2-regular generalized loops, i.e., F ∈ 𝓛_2-Loop if d_F(v) = 2 for
every vertex v of F. One can check that Z_Loop = Z_2-Loop for the Ising model without external
fields. Furthermore, as stated in [12, 13] for the general case, Z_2-Loop provides a good empirical
estimation for Z_Loop.
3 Estimating 2-regular loop series via MCMC
In this section, we aim to describe how the 2-regular loop series Z_2-Loop can be estimated in
polynomial time. To this end, we first assume that the maximum degree Δ of the graph G is
at most 3. This degree-constrained assumption is not really restrictive since any pairwise binary
model can be easily expressed as an equivalent one with Δ ≤ 3, e.g., see the supplementary material.

¹Note that the number of 2-regular loops is exponentially large in general.

The rest of this section consists of two parts. We first propose an algorithm generating a 2-regular
loop sample with probability proportional to the absolute value of its weight, i.e.,
$$\pi_{\text{2-Loop}}(F) := \frac{|w(F)|}{Z^{\dagger}_{\text{2-Loop}}},\qquad\text{where}\quad
Z^{\dagger}_{\text{2-Loop}} = \sum_{F\in\mathcal{L}_{\text{2-Loop}}}|w(F)|.$$
Note that this 2-regular loop contribution allows the following factorization: for any F ∈ 𝓛_2-Loop,
$$|w(F)| = \prod_{e\in F} w(e),\qquad\text{where}\quad
w(e) := \frac{\tau_{u,v}(1,1) - \tau_u(1)\,\tau_v(1)}{\sqrt{\tau_u(1)\,\tau_v(1)\,(1-\tau_u(1))\,(1-\tau_v(1))}}.\tag{1}$$
In the second part, we use the sampler constructed in the first part to design a simulated annealing
scheme to estimate Z_2-Loop.
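The edge weights in (1) are simple functions of the BP marginals; the following helper (our own sketch, with hypothetical names) computes w(e) and the resulting |w(F)| for a 2-regular loop given as a list of edges.

```python
import numpy as np

def edge_weight(tau_u1, tau_v1, tau_uv11):
    """w(e) from equation (1): tau_u1 = tau_u(1), tau_v1 = tau_v(1), tau_uv11 = tau_{u,v}(1,1)."""
    num = tau_uv11 - tau_u1 * tau_v1
    den = np.sqrt(tau_u1 * tau_v1 * (1.0 - tau_u1) * (1.0 - tau_v1))
    return num / den

def loop_weight_abs(F_edges, tau1, tau11):
    """|w(F)| for a 2-regular loop F; absolute values handle possibly negative w(e)."""
    w = 1.0
    for (u, v) in F_edges:
        w *= abs(edge_weight(tau1[u], tau1[v], tau11[(u, v)]))
    return w
```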
3.1 Sampling 2-regular loops
We suggest sampling the 2-regular loops distributed according to π_2-Loop through a version of the
Worm algorithm proposed by Prokof'ev and Svistunov [15]. It can be viewed as an MC exploring
the set 𝓛_2-Loop ∪ 𝓛_2-Odd, where 𝓛_2-Odd is the set of all subgraphs of G with exactly two odd-degree
vertices. Given current state F ∈ 𝓛_2-Loop ∪ 𝓛_2-Odd, it chooses the next state F′ as follows:

1. If F ∉ 𝓛_2-Odd, pick a random vertex v (uniformly) from V. Otherwise, pick a random
   odd-degree vertex v (uniformly) from F.
2. Choose a random neighbor u of v (uniformly) within G, and set F′ ← F initially.
3. Update F′ ← F ⊕ {u, v} with the probability
$$\begin{cases}
\min\left(\dfrac{1}{n}\cdot\dfrac{|w(F\oplus\{u,v\})|}{|w(F)|},\ 1\right) & \text{if } F\in\mathcal{L}_{\text{2-Loop}}\\[2mm]
\min\left(\dfrac{n}{4}\cdot\dfrac{|w(F\oplus\{u,v\})|}{|w(F)|},\ 1\right) & \text{else if } F\oplus\{u,v\}\in\mathcal{L}_{\text{2-Loop}}\\[2mm]
\min\left(\dfrac{d(v)}{2\,d(u)}\cdot\dfrac{|w(F\oplus\{u,v\})|}{|w(F)|},\ 1\right) & \text{else if } F,\ F\oplus\{u,v\}\in\mathcal{L}_{\text{2-Odd}}
\end{cases}$$
Here, ⊕ denotes the symmetric difference and, for F ∈ 𝓛_2-Odd, its weight is defined according to
w(F) = ∏_{e∈F} w(e). In essence, the Worm algorithm consists in either deleting or adding an edge
to the current subgraph F. From the Worm algorithm, we transition to the following algorithm, which
samples 2-regular loops with probability π_2-Loop simply by adding rejection of F if F ∈ 𝓛_2-Odd.
Algorithm 1 Sampling 2-regular loops
1: Input: Number of trials N; number of iterations T of the Worm algorithm
2: Output: 2-regular loop F.
3: for i = 1 → N do
4:    Set F ← ∅ and update it T times by running the Worm algorithm
5:    if F is a 2-regular loop then
6:       BREAK and output F.
7:    end if
8: end for
9: Output F = ∅.
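The following Python sketch (ours; all names are hypothetical, and the acceptance factors follow our reconstruction of the update rule above) puts the Worm move and the rejection wrapper of Algorithm 1 together. Here w_abs(F) should return ∏_{e∈F} |w(e)| (1 for the empty set).

```python
import random

def odd_vertices(F, G):
    """Vertices of odd degree in the edge set F; G maps each vertex to its neighbor list."""
    deg = {v: 0 for v in G}
    for (a, b) in F:
        deg[a] += 1
        deg[b] += 1
    return [v for v, d in deg.items() if d % 2 == 1]

def worm_step(F, G, w_abs):
    """One move of the modified Worm algorithm (Section 3.1); F: frozenset of edges (u, v), u < v."""
    n = len(G)
    odd = odd_vertices(F, G)
    v = random.choice(odd) if odd else random.randrange(n)
    u = random.choice(G[v])
    Fp = F ^ frozenset([(min(u, v), max(u, v))])     # F xor {u, v}
    ratio = w_abs(Fp) / w_abs(F)
    if not odd:                                      # F is a 2-regular loop (or empty)
        p = min(ratio / n, 1.0)
    elif not odd_vertices(Fp, G):                    # F xor {u, v} is a 2-regular loop
        p = min(n * ratio / 4.0, 1.0)
    else:                                            # both states have two odd-degree vertices
        p = min(len(G[v]) * ratio / (2.0 * len(G[u])), 1.0)
    return Fp if random.random() < p else F

def sample_2regular_loop(G, w_abs, N, T):
    """Algorithm 1: N trials of T Worm moves each; reject runs ending with odd-degree vertices."""
    for _ in range(N):
        F = frozenset()
        for _ in range(T):
            F = worm_step(F, G, w_abs)
        if not odd_vertices(F, G):
            return F
    return frozenset()
```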
The following theorem states that Algorithm 1 can generate a desired random sample in polynomial time.

Theorem 1. Given δ > 0, choose inputs of Algorithm 1 as
$$N \ge 1.2\, n \log(3\delta^{-1}) \qquad\text{and}\qquad T \ge (m-n+1)\log 2 + 4\Delta m n^4 \log(3n\delta^{-1}).$$
Then, it follows that
$$\frac{1}{2}\sum_{F\in\mathcal{L}_{\text{2-Loop}}}\Big|\,P\big[\text{Algorithm 1 outputs } F\big] - \pi_{\text{2-Loop}}(F)\,\Big| \;\le\; \delta,$$
namely, the total variation distance between π_2-Loop and the output distribution of Algorithm 1 is at
most δ.
The proof of the above theorem is presented in the supplementary material due to the space constraint.
In the proof, we first show that MC induced by the Worm algorithm mixes in polynomial time, and
then prove that acceptance of a 2-regular loop, i.e., line 6 of Algorithm 1, occurs with high probability.
Notice that the uniform-weight version of the former proof, i.e., fast mixing, was recently proven in
[18]. For completeness of the exposition, we present the proof for the general case of interest to
us. The latter proof, i.e., high acceptance, requires bounding |𝓛_2-Loop| and |𝓛_2-Odd| to show that the
probability of sampling 2-regular loops under the Worm algorithm is 1/poly(n) for some polynomial
function poly(n).
3.2 Simulated annealing for approximating 2-regular loop series
Here we utilize Theorem 1 to describe an algorithm approximating the 2-regular LS Z_2-Loop in
polynomial time. To achieve this goal, we rely on the simulated annealing strategy [19], which
requires deciding a monotone cooling schedule β_0, β_1, ..., β_{ℓ−1}, β_ℓ, where β_ℓ corresponds to the
target counting problem and β_0 to its relaxed, easy version. Thus, designing an appropriate
cooling strategy is the first challenge to address. We will also describe how to deal with the issue
that Z_2-Loop is a sum of positive and negative terms, while most simulated annealing strategies in
the literature mainly study sums of non-negative terms. This second challenge is related to the
so-called "fermion sign problem" common in statistical mechanics of quantum systems [25]. Before
we describe the proposed algorithm in detail, let us provide its intuitive sketch.

The proposed algorithm consists of two parts: a) estimating Z†_2-Loop via a simulated annealing strategy,
and b) estimating Z_2-Loop/Z†_2-Loop via counting samples corresponding to negative terms in the
2-regular loop series. First consider the following β-parametrized, auxiliary distribution over 2-regular
loops:
$$\pi_{\text{2-Loop}}(F : \beta) = \frac{1}{Z^{\dagger}_{\text{2-Loop}}(\beta)}\,|w(F)|^{\beta},\qquad\text{for } 0\le\beta\le 1.\tag{2}$$
Note that one can generate samples approximately with probability (2) in polynomial time using
Algorithm 1 by setting w ← w^β. Indeed, it follows that for β′ > β,
$$\frac{Z^{\dagger}_{\text{2-Loop}}(\beta')}{Z^{\dagger}_{\text{2-Loop}}(\beta)}
= \sum_{F\in\mathcal{L}_{\text{2-Loop}}}\frac{|w(F)|^{\beta'-\beta}\,|w(F)|^{\beta}}{Z^{\dagger}_{\text{2-Loop}}(\beta)}
= \mathbb{E}_{\pi_{\text{2-Loop}}(\beta)}\Big[|w(F)|^{\beta'-\beta}\Big],$$
where the expectation can be estimated using O(1) samples if it is Ω(1), i.e., if β′ is sufficiently close
to β. Then, for any increasing sequence β_0 = 0, β_1, ..., β_{n−1}, β_n = 1, we derive
$$Z^{\dagger}_{\text{2-Loop}} = \frac{Z^{\dagger}_{\text{2-Loop}}(\beta_n)}{Z^{\dagger}_{\text{2-Loop}}(\beta_{n-1})}\cdot
\frac{Z^{\dagger}_{\text{2-Loop}}(\beta_{n-1})}{Z^{\dagger}_{\text{2-Loop}}(\beta_{n-2})}\cdots
\frac{Z^{\dagger}_{\text{2-Loop}}(\beta_2)}{Z^{\dagger}_{\text{2-Loop}}(\beta_1)}\cdot
\frac{Z^{\dagger}_{\text{2-Loop}}(\beta_1)}{Z^{\dagger}_{\text{2-Loop}}(\beta_0)}\cdot Z^{\dagger}_{\text{2-Loop}}(0),$$
where it is known that Z†_2-Loop(0), i.e., the total number of 2-regular loops, is exactly 2^{m−n+1} [16].
This allows us to estimate Z†_2-Loop simply by estimating E_{π_2-Loop(β_i)}[|w(F)|^{β_{i+1}−β_i}] for all i.
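A compact rendering of this telescoping estimator (our own sketch; `sample_loops` stands for Algorithm 1 run on the tempered weights w^β) looks as follows.

```python
import numpy as np

def estimate_Z_dagger(sample_loops, w_abs, betas, m, n, s=100):
    """Telescoping simulated-annealing estimate of Z†_2-Loop (illustrative sketch).
    sample_loops(beta, s): s loops F ~ pi_2-Loop(beta); betas: 0 = beta_0 < ... < beta_last = 1."""
    log_Z = (m - n + 1) * np.log(2.0)         # Z†_2-Loop(0) = 2^(m-n+1)  [16]
    for b, b_next in zip(betas[:-1], betas[1:]):
        loops = sample_loops(b, s)
        H = np.mean([w_abs(F) ** (b_next - b) for F in loops])
        log_Z += np.log(H)                     # Z†(b_next)/Z†(b) ~ E[|w(F)|^(b_next - b)]
    return np.exp(log_Z)
```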
Our next step is to estimate the ratio Z_2-Loop/Z†_2-Loop. Let 𝓛⁻_2-Loop denote the set of negative 2-regular
loops, i.e.,
$$\mathcal{L}^{-}_{\text{2-Loop}} := \{F : F\in\mathcal{L}_{\text{2-Loop}},\ w(F) < 0\}.$$
Then, the 2-regular loop series can be expressed as
$$Z_{\text{2-Loop}} = \left(1 - 2\,\frac{\sum_{F\in\mathcal{L}^{-}_{\text{2-Loop}}}|w(F)|}{Z^{\dagger}_{\text{2-Loop}}}\right) Z^{\dagger}_{\text{2-Loop}}
= \Big(1 - 2\,P_{\pi_{\text{2-Loop}}}\big[w(F)<0\big]\Big)\, Z^{\dagger}_{\text{2-Loop}},$$
where we estimate P_{π_2-Loop}[w(F) < 0] again using samples generated by Algorithm 1.
We provide the formal description of the proposed algorithm and its error bound as follows.
Algorithm 2 Approximation for Z_2-Loop
1: Input: Increasing sequence β_0 = 0 < β_1 < ··· < β_{n−1} < β_n = 1; number of samples s_1, s_2;
   number of trials N_1; number of iterations T_1 for Algorithm 1.
2: for i = 0 → n − 1 do
3:    Generate 2-regular loops F_1, ..., F_{s_1} for π_2-Loop(β_i) using Algorithm 1 with input N_1 and
      T_1, and set
         H_i ← (1/s_1) Σ_j |w(F_j)|^{β_{i+1} − β_i}.
4: end for
5: Generate 2-regular loops F_1, ..., F_{s_2} for π_2-Loop using Algorithm 1 with input N_2 and T_2, and
   set
         κ ← |{F_j : w(F_j) < 0}| / s_2.
6: Output: Ẑ_2-Loop ← (1 − 2κ) 2^{m−n+1} ∏_i H_i.
Theorem 2. Given δ, ε > 0, choose inputs of Algorithm 2 as β_i = i/n for i = 1, 2, ..., n − 1,
$$s_1 \ge 18144\, n^2 \varepsilon^{-2} w_{\min}^{-1} \lceil\log(6n\delta^{-1})\rceil,\qquad
N_1 \ge 1.2\, n \log(144\, n\varepsilon^{-1} w_{\min}^{-1}),$$
$$T_1 \ge (m-n+1)\log 2 + 4\Delta m n^4 \log(48\, n\varepsilon^{-1} w_{\min}^{-1}),$$
$$s_2 \ge 18144\,\kappa\,(1-2\kappa)^{-2}\varepsilon^{-2}\lceil\log(3\delta^{-1})\rceil,\qquad
N_2 \ge 1.2\, n \log(144\,\varepsilon^{-1}(1-2\kappa)^{-1}),$$
$$T_2 \ge (m-n+1)\log 2 + 4\Delta m n^4 \log(48\,\varepsilon^{-1}(1-2\kappa)^{-1}),$$
where w_min = min_{e∈E} w(e) and κ = P_{π_2-Loop}[w(F) < 0]. Then the following statement holds:
$$P\left[\frac{|\widehat{Z}_{\text{2-Loop}} - Z_{\text{2-Loop}}|}{Z_{\text{2-Loop}}} \le \varepsilon\right] \ge 1 - \delta,$$
which means Algorithm 2 estimates Z_2-Loop within approximation ratio 1 ± ε with high probability.
The proof of the above theorem is presented in the supplementary material due to the space constraint.
We note that the constants entering Theorem 2 were not optimized. Theorem 2 implies that the
complexity of Algorithm 2 is polynomial with respect to n, 1/ε, 1/δ under the assumption that w_min^{−1}
and (1 − 2P_{π_2-Loop}[w(F) < 0])^{−1} are polynomially bounded. Both w_min and 1 − 2P_{π_2-Loop}[w(F) < 0]
depend on the choice of BP fixed point; however, it is unlikely (unless a degeneracy occurs) that these
quantities become badly behaved. In particular, P_{π_2-Loop}[w(F) < 0] = 0 in the case of attractive models [20].
4 Estimating full loop series via MCMC
In this section, we aim at estimating the full loop series Z_Loop. To this end, we design a novel MC
sampler for generalized loops, which adds (or removes) a cycle basis element or a path to (or from)
the current generalized loop. Therefore, we naturally start this section by introducing the necessary
background on cycle bases. Then, we turn to describing the design of the MC sampler for generalized
loops. Finally, we describe a simulated annealing scheme similar to the one described in the preceding
section. We also report its experimental performance, comparing with other methods.
4.1 Sampling generalized loops with cycle basis
The cycle basis C of the graph G is a minimal set of cycles which allows to represent every Eulerian
subgraph of G (i.e., every subgraph containing no odd-degree vertex) as a symmetric difference of
cycles in the set [16]. Let us characterize the combinatorial structure of the generalized loop using
the cycle basis. To this end, consider a set of paths between any pair of vertices:
$$\mathcal{P} = \{P_{u,v} : u \ne v,\ u, v \in V,\ P_{u,v}\text{ is a path from } u \text{ to } v\},$$
i.e., |𝓟| = $\binom{n}{2}$. Then the following theorem allows us to decompose any generalized loop with
respect to any selected C and 𝓟.
Theorem 3. Consider any cycle basis C and path set 𝓟. Then, for any generalized loop F, there
exists a decomposition B ⊂ C ∪ 𝓟 such that F can be expressed as a symmetric difference of the
elements of B, i.e., F = B_1 ⊕ B_2 ⊕ ··· ⊕ B_{k−1} ⊕ B_k for some B_i ∈ B.

The proof of the above theorem is given in the supplementary material due to the space constraint.
Now, given any choice of C, 𝓟, consider the following transition from F ∈ 𝓛 to the next state F′:

1. Choose, uniformly at random, an element B ∈ C ∪ 𝓟, and set F′ ← F initially.
2. If F ⊕ B ∈ 𝓛, update
$$F' \leftarrow \begin{cases} F\oplus B & \text{with probability } \min\left\{1,\ \dfrac{|w(F\oplus B)|}{|w(F)|}\right\}\\[2mm] F & \text{otherwise.}\end{cases}$$
Due to Theorem 3, it is easy to check that the proposed MC is irreducible and aperiodic, i.e., ergodic,
and the distribution of its t-th state converges to the following stationary distribution as t → ∞:
$$\pi_{\text{Loop}}(F) = \frac{|w(F)|}{Z^{\dagger}_{\text{Loop}}},\qquad\text{where}\quad
Z^{\dagger}_{\text{Loop}} = \sum_{F\in\mathcal{L}}|w(F)|.$$
One also has freedom in choosing C, 𝓟. To accelerate mixing of the MC, we suggest choosing the
minimum weighted cycle basis C and the shortest paths 𝓟 with respect to the edge weights {log w(e)}
defined in (1), which are computable using the algorithm in [16] and the Bellman-Ford algorithm
[21], respectively. This encourages transitions between generalized loops with similar weights.
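One transition of this sampler is only a few lines of code; the sketch below is ours (data types and names are assumptions) and shows the Metropolis acceptance over the cycle-basis/path moves.

```python
import random

def cycle_basis_mc_step(F, moves, w_abs, in_L):
    """One transition of the cycle-basis MC of Section 4.1 (illustrative sketch).
    F: current generalized loop as a frozenset of edges; moves: list C ∪ P of edge sets;
    w_abs(F): |w(F)|; in_L(F): membership test for the set of generalized loops."""
    B = random.choice(moves)
    Fp = F ^ B                                   # F xor B, i.e., the symmetric difference
    if in_L(Fp) and random.random() < min(1.0, w_abs(Fp) / w_abs(F)):
        return Fp
    return F
```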
4.2 Simulated annealing for approximating full loop series
Algorithm 3 Approximation for Z_Loop
1: Input: Decreasing sequence β_0 > β_1 > ··· > β_{ℓ−1} > β_ℓ = 1; number of samples s_0, s_1, s_2;
   number of iterations T_0, T_1, T_2 for the MC described in Section 4.1.
2: Generate generalized loops F_1, ..., F_{s_0} by running T_0 iterations of the MC described in Section
   4.1 for π_Loop(β_0), and set
      U ← (s_0/s*) |w(F*)|^{β_0},
   where F* = arg max_{F∈{F_1,...,F_{s_0}}} |w(F)| and s* is the number of times F* was sampled.
3: for i = 0 → ℓ − 1 do
4:    Generate generalized loops F_1, ..., F_{s_1} by running T_1 iterations of the MC described in
      Section 4.1 for π_Loop(β_i), and set H_i ← (1/s_1) Σ_j |w(F_j)|^{β_{i+1} − β_i}.
5: end for
6: Generate generalized loops F_1, ..., F_{s_2} by running T_2 iterations of the MC described in Section
   4.1 for π_Loop, and set
      κ ← |{F_j : w(F_j) < 0}| / s_2.
7: Output: Ẑ_Loop ← (1 − 2κ) ∏_i H_i · U.
Now we are ready to describe a simulated annealing scheme for estimating Z_Loop. It is similar,
in principle, to that in Section 3.2. First, we again introduce the following β-parametrized,
auxiliary probability distribution π_Loop(F : β) = |w(F)|^β / Z†_Loop(β). For any decreasing sequence
of annealing parameters β_0, β_1, ..., β_{ℓ−1}, β_ℓ = 1, we derive
$$Z^{\dagger}_{\text{Loop}} = \frac{Z^{\dagger}_{\text{Loop}}(\beta_\ell)}{Z^{\dagger}_{\text{Loop}}(\beta_{\ell-1})}\cdot
\frac{Z^{\dagger}_{\text{Loop}}(\beta_{\ell-1})}{Z^{\dagger}_{\text{Loop}}(\beta_{\ell-2})}\cdots
\frac{Z^{\dagger}_{\text{Loop}}(\beta_2)}{Z^{\dagger}_{\text{Loop}}(\beta_1)}\cdot
\frac{Z^{\dagger}_{\text{Loop}}(\beta_1)}{Z^{\dagger}_{\text{Loop}}(\beta_0)}\cdot Z^{\dagger}_{\text{Loop}}(\beta_0).$$
Following procedures similar to those in Section 3.2, one can estimate Z†_Loop(β′)/Z†_Loop(β) =
E_{π_Loop(β)}[|w(F)|^{β′−β}] using the sampler described in Section 4.1. Moreover, Z†_Loop(β_0) =
|w(F*)|^{β_0}/P_{π_Loop(β_0)}(F*) is estimated by sampling the generalized loop F* with the highest
probability P_{π_Loop(β_0)}(F*). For large enough β_0, the approximation error becomes relatively
small since P_{π_Loop(β_0)}(F*) ∝ |w(F*)|^{β_0} dominates over the distribution. In combination, this
provides the desired approximation for Z_Loop. The result is stated formally in Algorithm 3.
Figure 1: Plots of the log-partition function approximation error with respect to (average) interaction
strength: (a) Ising model with no external field, (b) Ising model with external fields, and (c) hard-core
model. Each point is averaged over 20 (random) models.
4.3 Experimental results
In this section, we report experimental results for computing the partition function of the Ising model
and the hard-core model. We compare Algorithm 2 in Section 3 (coined MCMC-BP-2reg) and
Algorithm 3 in Section 4.2 (coined MCMC-BP-whole) with the bare Bethe approximation (coined
BP) and the popular Gibbs sampler (coined MCMC-Gibbs). To make the comparison fair, we use the
same annealing scheme for all MCMC schemes, thus making their running times comparable. More
specifically, we generate each sample after running T_1 = 1,000 iterations of an MC and take s_1 = 100
samples to compute each estimation (e.g., H_i) at intermediate steps. For the performance measure, we
use the log-partition function approximation error defined as |log Z − log Z_approx|/|log Z|, where
Z_approx is the output of the respective algorithm. We conducted 3 experiments on the 4 × 4 grid
graph. In our first experimental setting, we consider the Ising model with varying interaction strength
and no external (magnetic) field. To prepare the model of interest, we start from the Ising model
with uniform (ferromagnetic/attractive and anti-ferromagnetic/repulsive) interaction strength and
then add "glassy" variability in the interaction strength, modeled via i.i.d. Gaussian random variables
with mean 0 and variance 0.5², i.e., N(0, 0.5²). In other words, given average interaction strength
0.3, each interaction strength in the model is independently chosen as N(0.3, 0.5²). The second
experiment was conducted by adding N(0, 0.5²) corrections to the external fields under the same
conditions as in the first experiment. In this case we observe that BP often fails to converge, and we use
the Concave Convex Procedure (CCCP) [23] for finding BP fixed points. Finally, we experiment with
the hard-core model on the 4 × 4 grid graph with a varying positive parameter λ > 0, called "fugacity"
[26]. As seen clearly in Figure 1, BP and MCMC-Gibbs are outperformed by MCMC-BP-2reg or
MCMC-BP-whole in most tested regimes in the first experiment with no external field, where in this
case the 2-regular loop series (LS) is equal to the full one. Even in the regimes where MCMC-Gibbs
outperforms BP, our schemes correct the error of BP and perform at least as well as MCMC-Gibbs.
In the experiments, we observe that the advantage of our schemes over BP is more pronounced when
the error of BP is large. A theoretical reasoning behind this observation is as follows. If the performance
of BP is good, i.e., the loop series (LS) is close to 1, the contribution of the empty generalized loop, i.e.,
w(∅), in LS is significant, and it becomes harder to sample other generalized loops accurately.
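For reproducibility of the setup described above, a random "glassy" Ising instance on a grid can be generated as follows; this is our own helper (names and the seeding are assumptions, not the authors' code).

```python
import numpy as np

def glassy_grid_ising(k=4, mean_J=0.3, sigma=0.5, field_sigma=0.0, seed=0):
    """Random Ising instance on a k x k grid: couplings J_uv ~ N(mean_J, sigma^2),
    optional external fields h_v ~ N(0, field_sigma^2), as in the experiments above."""
    rng = np.random.default_rng(seed)
    edges = [((i, j), (i + 1, j)) for i in range(k - 1) for j in range(k)] + \
            [((i, j), (i, j + 1)) for i in range(k) for j in range(k - 1)]
    J = {e: rng.normal(mean_J, sigma) for e in edges}
    h = {(i, j): rng.normal(0.0, field_sigma) for i in range(k) for j in range(k)}
    return J, h
```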
5 Conclusion
In this paper, we propose new MCMC schemes for approximate inference in GMs. The main novelty
of our approach is in designing BP-aware MCs utilizing the non-trivial BP solutions. In experiments,
our BP based MCMC scheme also outperforms other alternatives. We anticipate that this new
technique will be of interest to many applications where GMs are used for statistical reasoning.
Acknowledgement
This work was supported by the National Research Council of Science & Technology (NST) grant by
the Korea government (MSIP) (No. CRC-15-05-ETRI), and funding from the U.S. Department of
Energy's Office of Electricity as part of the DOE Grid Modernization Initiative.
References
[1] J. Pearl, "Probabilistic reasoning in intelligent systems: networks of plausible inference," Morgan Kaufmann, 2014.
[2] R. G. Gallager, "Low-density parity-check codes," IRE Transactions on Information Theory 8(1): 21-28, 1962.
[3] F. R. Kschischang and B. J. Frey, "Iterative decoding of compound codes by probability propagation in graphical models," IEEE Journal on Selected Areas in Communications 16(2): 219-230, 1998.
[4] M. I. Jordan, ed., "Learning in graphical models," Springer Science & Business Media 89, 1998.
[5] R. J. Baxter, "Exactly solved models in statistical mechanics," Courier Corporation, 2007.
[6] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael, "Learning low-level vision," International Journal of Computer Vision 40(1): 25-47, 2000.
[7] V. Chandrasekaran, N. Srebro, and P. Harsha, "Complexity of inference in graphical models," Conference on Uncertainty in Artificial Intelligence, 2008.
[8] M. Jerrum and A. Sinclair, "Polynomial-time approximation algorithms for the Ising model," SIAM Journal on Computing 22(5): 1087-1116, 1993.
[9] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan, "An introduction to MCMC for machine learning," Machine Learning 50(1-2): 5-43, 2003.
[10] J. Pearl, "Reverend Bayes on inference engines: A distributed hierarchical approach," AAAI Conference on Artificial Intelligence, 1982.
[11] M. Chertkov and V. Y. Chernyak, "Loop series for discrete statistical models on graphs," Journal of Statistical Mechanics: Theory and Experiment 2006(6): P06009, 2006.
[12] M. Chertkov, V. Y. Chernyak, and R. Teodorescu, "Belief propagation and loop series on planar graphs," Journal of Statistical Mechanics: Theory and Experiment 2008(5): P05003, 2008.
[13] V. Gomez, H. J. Kappen, and M. Chertkov, "Approximate inference on planar graphs using Loop Calculus and Belief Propagation," Journal of Machine Learning Research 11: 1273-1296, 2010.
[14] P. W. Kasteleyn, "The statistics of dimers on a lattice," Classic Papers in Combinatorics, Birkhäuser Boston, 281-298, 2009.
[15] N. Prokof'ev and B. Svistunov, "Worm algorithms for classical statistical models," Physical Review Letters 87(16): 160601, 2001.
[16] J. D. Horton, "A polynomial-time algorithm to find the shortest cycle basis of a graph," SIAM Journal on Computing 16(2): 358-366, 1987.
[17] H. A. Kramers and G. H. Wannier, "Statistics of the two-dimensional ferromagnet. Part II," Physical Review 60(3): 263, 1941.
[18] A. Collevecchio, T. M. Garoni, T. Hyndman, and D. Tokarev, "The worm process for the Ising model is rapidly mixing," arXiv preprint arXiv:1509.03201, 2015.
[19] S. Kirkpatrick, "Optimization by simulated annealing: Quantitative studies," Journal of Statistical Physics 34(5-6): 975-986, 1984.
[20] N. Ruozzi, "The Bethe partition function of log-supermodular graphical models," Advances in Neural Information Processing Systems, 2012.
[21] J. Bang-Jensen and G. Z. Gutin, "Digraphs: theory, algorithms and applications," Springer Science & Business Media, 2008.
[22] Y. W. Teh and M. Welling, "Belief optimization for binary networks: a stable alternative to loopy belief propagation," Proceedings of the Conference on Uncertainty in Artificial Intelligence, 493-500, 2001.
[23] A. L. Yuille, "CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation," Neural Computation 14(7): 1691-1722, 2002.
[24] J. Shin, "The complexity of approximating a Bethe equilibrium," IEEE Transactions on Information Theory 60(7): 3959-3969, 2014.
[25] https://www.quora.com/Statistical-Mechanics-What-is-the-fermion-sign-problem
[26] M. Dyer, A. Frieze, and M. Jerrum, "On counting independent sets in sparse graphs," SIAM Journal on Computing 31(5): 1527-1541, 2002.
[27] J. Schweinsberg, "An O(n²) bound for the relaxation time of a Markov chain on cladograms," Random Structures & Algorithms 20(1): 59-70, 2002.
Reshaped Wirtinger Flow for
Solving Quadratic System of Equations
Huishuai Zhang
Department of EECS
Syracuse University
Syracuse, NY 13244
[email protected]
Yingbin Liang
Department of EECS
Syracuse University
Syracuse, NY 13244
[email protected]
Abstract
We study the problem of recovering a vector x ∈ ℝⁿ from its magnitude measurements y_i = |⟨a_i, x⟩|,
i = 1, ..., m. Our work is along the line of the Wirtinger flow (WF) approach of Candès et al. [2015],
which solves the problem by minimizing a nonconvex loss function via a gradient algorithm and can
be shown to converge to a global optimal point under good initialization. In contrast to the smooth
loss function used in WF, we adopt a nonsmooth but lower-order loss function, and design a
gradient-like algorithm (referred to as reshaped-WF). We show that for random Gaussian measurements,
reshaped-WF enjoys geometric convergence to a global optimal point as long as the number m of
measurements is on the order of O(n), where n is the dimension of the unknown x. This improves the
sample complexity of WF, and achieves the same sample complexity as truncated-WF (Chen and
Candes [2015]) but without truncation at the gradient step. Furthermore, reshaped-WF costs less
computationally than WF, and runs faster numerically than both WF and truncated-WF. Bypassing
higher-order variables in the loss function and truncations in the gradient loop, the analysis of
reshaped-WF is simplified.
1 Introduction
Recovering a signal via a quadratic system of equations has gained intensive attention recently.
More specifically, suppose a signal of interest x ∈ ℝⁿ/ℂⁿ is measured via random design vectors
a_i ∈ ℝⁿ/ℂⁿ, with the measurements y_i given by
$$y_i = |\langle a_i, x\rangle|,\qquad\text{for } i = 1, \cdots, m,\tag{1}$$
which can also be written equivalently in a quadratic form as y′_i = |⟨a_i, x⟩|². The goal is to recover
the signal x based on the measurements y = {y_i}_{i=1}^m and the design vectors {a_i}_{i=1}^m. Such a
problem arises naturally in phase retrieval applications, in which the sign/phase of the signal is
to be recovered from only measurements of magnitudes. Various algorithms have been proposed to
solve this problem since the 1970s. The error-reduction methods proposed in Gerchberg [1972], Fienup
[1982] work well empirically but lack theoretical guarantees. More recently, convex relaxations of
the problem have been formulated, for example, via phase lifting Chai et al. [2011], Candès et al.
[2013], Gross et al. [2015] and via phase cut Waldspurger et al. [2015], and the correspondingly
developed algorithms typically come with performance guarantee. The reader can refer to the review
paper Shechtman et al. [2015] to learn more about applications and algorithms of the phase retrieval
problem.
While they come with good theoretical guarantees, these convex methods often suffer from high computational complexity, particularly when the signal dimension is large. On the other hand, more efficient nonconvex
approaches have been proposed and shown to recover the true signal as long as initialization is good
enough. Netrapalli et al. [2013] proposed AltMinPhase algorithm, which alternatively updates the
phase and the signal, with each signal update solving a least-squares problem, and showed that AltMinPhase
converges linearly and recovers the true signal with O(n log³ n) Gaussian measurements.
More recently, Candès et al. [2015] introduced the Wirtinger flow (WF) algorithm, which guarantees
signal recovery via a simple gradient algorithm with only O(n log n) Gaussian measurements and
attains ε-accuracy within O(mn² log(1/ε)) flops. More specifically, WF obtains a good initialization
by the spectral method, and then minimizes the following nonconvex loss function
$$\ell_{WF}(z) := \frac{1}{4m}\sum_{i=1}^{m}\left(|a_i^T z|^2 - y_i^2\right)^2,\tag{2}$$
via the gradient descent scheme.
WF was further improved by the truncated Wirtinger flow (truncated-WF) algorithm proposed in Chen
and Candes [2015], which adopts a Poisson loss function of |a_i^T z|² and keeps only well-behaved
measurements based on carefully designed truncation thresholds for calculating the initial seed and
every gradient step. Such truncation helps to yield linear convergence with a certain fixed step size,
and reduces both the sample complexity to O(n) and the convergence time to O(mn log(1/ε)).

It can be observed that WF uses the quadratic loss of |a_i^T z|², so that the optimization objective is a
smooth function of a_i^T z and the gradient step becomes simple. But this comes at the cost of a quartic
loss function. In this paper, we adopt the quadratic loss of |a_i^T z|. Although the loss function is not
smooth everywhere, it reduces the order of a_i^T z to two, and the overall curvature can be more
amenable to convergence of the gradient algorithm. The goal of this paper is to explore potential
advantages of such a nonsmooth lower-order loss function.
1.1 Our Contribution
This paper adopts the following loss function¹:
$$\ell(z) := \frac{1}{2m}\sum_{i=1}^{m}\left(|a_i^T z| - y_i\right)^2.\tag{3}$$
Compared to the loss function (2) in WF that adopts |a_i^T z|², the above loss function adopts the
absolute value/magnitude |a_i^T z| and hence has lower-order variables. For such a nonconvex and
nonsmooth loss function, we develop a gradient descent-like algorithm, which sets to zero the
"gradient" component corresponding to nonsmooth samples. We refer to such an algorithm, together
with truncated initialization using the spectral method, as reshaped Wirtinger flow (reshaped-WF). We
show that the lower-order loss function has great advantage in both statistical and computational
efficiency, although sacrificing smoothness. In fact, the curvature of such a loss function behaves
similarly to that of a least-squares loss function in the neighborhood of global optimums (see Section
2.2), and hence reshaped-WF converges fast. The nonsmoothness does not significantly affect the
convergence of the algorithm, because only with negligible probability does the algorithm encounter
nonsmooth points for some samples, which furthermore are set not to contribute to the gradient
direction by the algorithm. We summarize our main results as follows.
• Statistically, we show that reshaped-WF recovers the true signal with O(n) samples, when
the design vectors consist of independently and identically distributed (i.i.d.) Gaussian
entries, which is optimal in the order sense. Thus, even without truncation in gradient
steps (truncation is used only in the initialization stage), reshaped-WF improves the sample complexity
O(n log n) of WF, and achieves the same sample complexity as truncated-WF. It is thus
more robust to random measurements.
• Computationally, reshaped-WF converges geometrically, requiring O(mn log(1/ε)) flops
to reach ε-accuracy. Again, without truncation in gradient steps, reshaped-WF improves
the computational cost O(mn² log(1/ε)) of WF and achieves the same computational cost as
truncated-WF. Numerically, reshaped-WF is generally two times faster than truncated-WF
and four to six times faster than WF in terms of the number of iterations and time cost.
Compared to WF and truncated-WF, our technical proof of the performance guarantee is much simpler,
because the lower-order loss function allows us to bypass higher-order moments of variables and
truncation in gradient steps. We also anticipate that such analysis is more easily extendable. On the
other hand, the new form of the gradient step due to the nonsmoothness of the absolute value function
requires new developments of bounding techniques.

¹The loss function (3) was also used in Fienup [1982] to derive a gradient-like update for the phase retrieval
problem with Fourier magnitude measurements. However, our paper characterizes the global convergence
guarantee for such an algorithm with appropriate initialization, which was not studied in Fienup [1982].
1.2 Connection to Related Work
Along the line of developing nonconvex algorithms with global performance guarantees for the phase
retrieval problem, Netrapalli et al. [2013] developed an alternating minimization algorithm; Candès et al.
[2015], Chen and Candes [2015], Zhang et al. [2016], Cai et al. [2015] developed/studied first-order
gradient-like algorithms; and a recent study, Sun et al. [2016], characterized the geometric structure of
the nonconvex objective and designed a second-order trust-region algorithm. Also notable is Wei [2015],
which empirically demonstrated fast convergence of a so-called Kaczmarz stochastic algorithm. This
paper is most closely related to Candès et al. [2015], Chen and Candes [2015], Zhang et al. [2016],
but develops a new gradient-like algorithm based on a lower-order nonsmooth (as well as nonconvex)
loss function that yields advantageous statistical/computational efficiency.

Various algorithms have been proposed for minimizing a general nonconvex nonsmooth objective,
such as the gradient sampling algorithm Burke et al. [2005], Kiwiel [2007] and majorization-minimization
methods Ochs et al. [2015]. These algorithms were often shown to converge to critical points,
which may be local minimizers or saddle points, without explicit characterization of the convergence
rate. In contrast, our algorithm is specifically designed for the phase retrieval problem, and can be
shown to converge linearly to the global optimum under appropriate initialization.

The advantage of the nonsmooth loss function exhibited in our study is analogous in spirit to that of
the rectifier activation function (of the form max{0, ·}) in neural networks. It has been shown that
the rectified linear unit (ReLU) enjoys a superb advantage in reducing the training time Krizhevsky et al.
[2012] and promoting sparsity Glorot et al. [2011] over its counterparts, the sigmoid and hyperbolic
tangent functions, in spite of non-linearity and non-differentiability at zero. Our result in fact also
demonstrates that a nonsmooth but simpler loss function yields improved performance.
1.3 Paper Organization and Notations
The rest of this paper is organized as follows. Section 2 describes the reshaped-WF algorithm in detail
and establishes its performance guarantee. In particular, Section 2.2 provides intuition about why
reshaped-WF is fast. Section 3 compares reshaped-WF with other competitive algorithms numerically.
Finally, Section 4 concludes the paper with comments on future directions.

Throughout the paper, boldface lowercase letters such as a_i, x, z denote vectors, and boldface capital
letters such as A, Y denote matrices. For two matrices, A ⪯ B means that B − A is positive definite.
The indicator function 1_A = 1 if the event A is true, and 1_A = 0 otherwise. The Euclidean distance
between two vectors up to a global sign difference is defined as dist(z, x) := min{‖z − x‖, ‖z + x‖}.
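For concreteness, this sign-invariant distance is one line of code (a trivial helper of our own):

```python
import numpy as np

def dist(z, x):
    """Euclidean distance up to a global sign difference: min(||z - x||, ||z + x||)."""
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))
```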
2 Algorithm and Performance Guarantee
In this paper, we wish to recover a signal x ∈ ℝⁿ based on m measurements y_i given by
$$y_i = |\langle a_i, x\rangle|,\qquad\text{for } i = 1, \cdots, m,\tag{4}$$
where a_i ∈ ℝⁿ for i = 1, ..., m are known measurement vectors generated by the Gaussian distribution
N(0, I_{n×n}). We focus on the real-valued case in the analysis, but the algorithm designed below is
applicable to the complex-valued case and the case with coded diffraction patterns (CDP), as we
demonstrate via numerical results in Section 3.
We design reshaped-WF (see Algorithm 1) for solving the above problem, which contains two stages:
spectral initialization and gradient loop. Suggested values for the parameters are α_l = 1, α_u = 5 and
μ = 0.8. The scaling parameter in λ_0 and the conjugate transpose a_i^* allow the algorithm to be readily
applicable to the complex and CDP cases. We next describe the two stages of the algorithm in detail in
Sections 2.1 and 2.2, respectively, and establish the convergence of the algorithm in Section 2.3.
2.1 Initialization via Spectral Method
We first note that initialization can adopt the spectral initialization method for WF in Candès et al.
[2015] or that for truncated-WF in Chen and Candes [2015], both of which are based on |a_i^* x|².
Here, we propose an alternative initialization in Algorithm 1 that uses the magnitude |a_i^* x| instead,
and truncates samples with both lower and upper thresholds as in (5). We show that such initialization
achieves smaller sample complexity than WF and the same order-level sample complexity as
truncated-WF, and furthermore, performs better than both WF and truncated-WF numerically.
Algorithm 1 Reshaped Wirtinger Flow
Input: y = {y_i}_{i=1}^m, {a_i}_{i=1}^m;
Parameters: lower and upper thresholds α_l, α_u for truncation in initialization, step size μ;
Initialization: Let z^(0) = λ_0 z̃, where
$$\lambda_0 = \frac{mn}{\sum_{i=1}^{m}\|a_i\|_1}\cdot\frac{1}{m}\sum_{i=1}^{m} y_i$$
and z̃ is the leading eigenvector of
$$Y := \frac{1}{m}\sum_{i=1}^{m} y_i\, a_i a_i^{*}\, \mathbf{1}_{\{\alpha_l \lambda_0 < y_i < \alpha_u \lambda_0\}}.\tag{5}$$
Gradient loop: for t = 0 : T − 1 do
$$z^{(t+1)} = z^{(t)} - \frac{\mu}{m}\sum_{i=1}^{m}\left(a_i^{*} z^{(t)} - y_i\cdot\frac{a_i^{*} z^{(t)}}{|a_i^{*} z^{(t)}|}\right) a_i.\tag{6}$$
Output z^(T).
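A compact NumPy sketch of Algorithm 1 for the real Gaussian case is given below. This is our own illustrative implementation (the power-method iteration count and helper names are assumptions), not the authors' released code.

```python
import numpy as np

def reshaped_wf(A, y, alpha_l=1.0, alpha_u=5.0, mu=0.8, T=1000):
    """Reshaped Wirtinger Flow (Algorithm 1), real Gaussian case -- illustrative sketch.
    A: m x n matrix with rows a_i; y: magnitudes |a_i^T x|."""
    m, n = A.shape
    # norm estimate lambda_0, then truncated spectral initialization as in (5)
    lam0 = (m * n / np.abs(A).sum()) * y.mean()
    keep = (alpha_l * lam0 < y) & (y < alpha_u * lam0)
    Y = (A[keep].T * y[keep]) @ A[keep] / m
    z = np.random.randn(n)
    for _ in range(50):                      # power method for the leading eigenvector
        z = Y @ z
        z /= np.linalg.norm(z)
    z *= lam0
    for _ in range(T):                       # gradient loop (6); np.sign(0) = 0 drops nonsmooth terms
        Az = A @ z
        z = z - (mu / m) * (A.T @ (Az - y * np.sign(Az)))
    return z
```

Note how the convention sgn(0) = 0 from the text falls out for free, since NumPy's sign of zero is zero.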
Our initialization consists of estimation of both the norm and direction of x. The norm estimation
of x is given by λ_0 in Algorithm 1, with mathematical justification in Suppl. A. Intuitively, with
real Gaussian measurements, the scaling coefficient mn / Σ_{i=1}^m ‖a_i‖_1 ≈ √(π/2). Moreover,
y_i = |a_i^T x| are independent sub-Gaussian random variables for i = 1, ..., m with mean √(2/π)‖x‖,
and thus (1/m) Σ_{i=1}^m y_i ≈ √(2/π)‖x‖. Combining these two facts yields the desired argument.
The direction of x is approximated by the leading eigenvector of Y, because Y approaches E[Y]
by concentration of measure, and the leading eigenvector of E[Y] takes the form cx for some scalar
c ∈ ℝ. We note that (5) involves truncation of samples from both sides, in contrast to truncation only
by an upper threshold in Chen and Candes [2015]. This is because y_i = |a_i^T x|² in Chen and Candes
[2015], so that small |a_i^T x| is further reduced by the square power to contribute less in Y; but small
values of y_i = |a_i^T x| can still introduce considerable contributions and hence should be truncated
by the lower threshold.

Figure 1: Comparison of three initialization methods with m = 6n and 50 iterations using the power
method. (Relative error versus signal dimension n.)

We next provide the formal statement of the performance guarantee for the initialization step that
we propose. The proof adapts that in Chen and Candes [2015]
Proposition 1. Fix δ > 0. The initialization step in Algorithm 1 yields z^(0) satisfying
‖z^(0) − x‖ ≤ δ‖x‖ with probability at least 1 − exp(−c_0 mε²), if m > C(δ, ε)n, where C is a
positive number only affected by δ and ε, and c_0 is some positive constant.
Finally, Figure 1 demonstrates that reshaped-WF achieves better initialization accuracy in terms of
the relative error ‖z^(0) − x‖/‖x‖ than WF and truncated-WF with Gaussian measurements.
2.2 Gradient Loop and Why Reshaped-WF is Fast
The gradient loop of Algorithm 1 is based on the loss function (3), which is rewritten below:
$$\ell(z) := \frac{1}{2m}\sum_{i=1}^{m}\left(|a_i^T z| - y_i\right)^2.\tag{7}$$
We define the update direction as
$$\nabla\ell(z) := \frac{1}{m}\sum_{i=1}^{m}\left(a_i^T z - y_i\,\mathrm{sgn}(a_i^T z)\right)a_i
= \frac{1}{m}\sum_{i=1}^{m}\left(a_i^T z - y_i\,\frac{a_i^T z}{|a_i^T z|}\right)a_i,\tag{8}$$
where sgn(·) is the sign function for nonzero arguments. We further set sgn(0) = 0 and 0/|0| = 0.
In fact, ∇ℓ(z) equals the gradient of the loss function (7) if a_i^T z ≠ 0 for all i = 1, ..., m. For
samples with a nonsmooth point, i.e., a_i^T z = 0, we adopt the Fréchet superdifferential Kruger [2003] for
nonconvex functions to set the corresponding gradient component to zero (as zero is an element of the
Fréchet superdifferential). With abuse of terminology, we still refer to ∇ℓ(z) in (8) as "gradient" for
simplicity; it rather represents the update direction in the gradient loop of Algorithm 1.
simplicity, which rather represents the update direction in the gradient loop of Algorithm 1.
We next provide the intuition about why reshaped WF is fast. Suppose that the spectral method sets
an initial point in the neighborhood of ground truth x. We compare reshaped-WF with the following
problem of solving x from linear equations yi = hai , xi with yi and ai for i = 1, . . . , m given. In
particular, we note that this problem has both magnitude and sign observation of the measurements.
Further suppose that the least-squares loss is used and gradient descent is applied to solve this
problem. Then the gradient is given by
m
1 X T
a z ? aTi x ai .
m i=1 i
Least square gradient: ?`LS (z) =
(9)
We now argue informally that the gradient (8) of reshaped-WF behaves similarly to the least-squares
gradient (9). For each i, the two gradient components are close if |aTi x| ? sgn(aTi z) is viewed as
an estimate of aTi x. The following lemma (see Suppl. B.2 for the proof) shows that if dist(z, x) is
small (guaranteed by initialization), then aTi z has the same sign with aTi x for large |aTi x|.
Lemma 1. Let a_i ∼ N(0, I_{n×n}). For any given x and z satisfying ‖x − z‖ < ((√2 − 1)/√2)‖x‖,
we have
$$P\left\{(a_i^T x)(a_i^T z) < 0 \,\Big|\, (a_i^T x)^2 = t\|x\|^2\right\} \le \mathrm{erfc}\left(\frac{\sqrt{t}\,\|x\|}{\sqrt{2}\,\|h\|}\right),\tag{10}$$
where h = z − x and erfc(u) := (2/√π) ∫_u^∞ exp(−τ²) dτ.
It is easy to observe in (10) that a large a_i^T x is likely to have the same sign as a_i^T z, so that the
corresponding gradient components in (8) and (9) are likely equal, whereas a small a_i^T x may have
a different sign from a_i^T z but contributes less to the gradient. Hence, overall the two gradients (8)
and (9) should be close to each other with large probability.
This fact can be further verified numerically. Figure 2(a) illustrates that reshaped-WF takes almost
the same number of iterations for recovering a signal (with only magnitude information) as the
least-squares gradient descent method for recovering a signal (with both magnitude and sign information).
Figure 2: Intuition of why reshaped-WF is fast. (a) Comparison of convergence behavior (relative error
versus number of iterations) between reshaped-WF and least-squares gradient descent; initialization
and parameters are the same for the two methods: n = 1000, m = 6n, step size μ = 0.8. (b) Expected
loss function of reshaped-WF for x = [1 −1]^T. (c) Expected loss function of WF for x = [1 −1]^T.
Figure 2(b) further illustrates that the expected loss surface of reshaped-WF (see Suppl. B for
expression) behaves similarly to a quadratic surface around the global optimums as compared to the
expected loss surface for WF (see Suppl. B for expression) in Figure 2(c).
2.3 Geometric Convergence of Reshaped-WF
We characterize the convergence of reshaped-WF in the following theorem.

Theorem 1. Consider the problem of solving any given x ∈ ℝⁿ from a system of equations (4) with
Gaussian measurement vectors. There exist some universal constants μ_0 > 0 (μ_0 can be set as 0.8 in
practice), 0 < ρ, ν < 1 and c_0, c_1, c_2 > 0 such that if m ≥ c_0 n and μ < μ_0, then with probability at
least 1 − c_1 exp(−c_2 m), Algorithm 1 yields
$$\mathrm{dist}(z^{(t)}, x) \le \nu(1-\rho)^t\|x\|,\qquad \forall t\in\mathbb{N}.\tag{11}$$
Outline of the Proof. We outline the proof here with details relegated to Suppl. C. Compared to WF
and truncated-WF, our proof is much simpler due to the lower-order loss function that reshaped-WF
relies on.
The central idea is to show that within the neighborhood of global optimums, reshaped-WF satisfies
the Regularity Condition RC(μ, λ, c) Chen and Candes [2015], i.e.,
$$\langle\nabla\ell(z), h\rangle \ge \frac{\mu}{2}\|\nabla\ell(z)\|^2 + \frac{\lambda}{2}\|h\|^2\tag{12}$$
for all z and h = z − x obeying ‖h‖ ≤ c‖x‖, where 0 < c < 1 is some constant. Then, as shown in
Chen and Candes [2015], once the initialization lands in this neighborhood, geometric convergence
can be guaranteed, i.e.,
$$\mathrm{dist}^2(z - \mu\nabla\ell(z),\, x) \le (1 - \mu\lambda)\,\mathrm{dist}^2(z, x),\tag{13}$$
for any z with ‖z − x‖ ≤ c‖x‖.
Lemmas 2 and 3 in Suppl. C yield that
$$\langle\nabla\ell(z), h\rangle \ge (1 - 0.26 - 2\epsilon)\|h\|^2 = (0.74 - 2\epsilon)\|h\|^2.$$
And Lemma 4 in Suppl. C further yields that
$$\|\nabla\ell(z)\| \le (1+\delta)\cdot 2\|h\|.\tag{14}$$
Therefore, the above two bounds imply that Regularity Condition (12) holds for μ and λ satisfying
$$0.74 - 2\epsilon \ge \frac{\mu}{2}\cdot 4(1+\delta)^2 + \frac{\lambda}{2}.\tag{15}$$
We note that (15) implies an upper bound μ ≤ 0.74/2 = 0.37, by taking ε and δ to be sufficiently small.
This suggests a range to set the step size in Algorithm 1. However, in practice, μ can be set much
larger than such a bound, say 0.8, while still keeping the algorithm convergent. This is because the
coefficients in the proof are set for convenience of proof rather than being tightly chosen.
Theorem 1 indicates that reshaped-WF recovers the true signal with O(n) samples, which is order-level
optimal. Such an algorithm improves the sample complexity O(n log n) of WF. Furthermore,
reshaped-WF does not require truncation of weak samples in the gradient step to achieve the same
sample complexity as truncated-WF. This is mainly because reshaped-WF benefits from the lower-order
loss function given in (7), the curvature of which behaves similarly to the least-squares loss
function locally, as we explain in Section 2.2.

Theorem 1 also suggests that reshaped-WF converges geometrically at a constant step size. To
reach ε-accuracy, it requires a computational cost of O(mn log(1/ε)) flops, which is better than WF's
O(mn² log(1/ε)). Furthermore, it does not require truncation in gradient steps to reach the same
computational cost as truncated-WF. Numerically, as we demonstrate in Section 3, reshaped-WF is
two times faster than truncated-WF and four to six times faster than WF in terms of both iteration
count and time cost in various examples.
Although our focus in this paper is on the noise-free model, reshaped-WF can be applied to noisy
models as well. Suppose the measurements are corrupted by bounded noises {η_i}_{i=1}^m satisfying
‖η‖/√m ≤ c‖x‖. Then, by adapting the proof of Theorem 1, it can be shown that the gradient loop
of reshaped-WF is robust, such that
$$\mathrm{dist}(z^{(t)}, x) \lesssim \frac{\|\eta\|}{\sqrt{m}} + (1-\rho)^t\|x\|,\qquad \forall t\in\mathbb{N},\tag{16}$$
for some ρ ∈ (0, 1). The numerical result under the Poisson noise model in Section 3 further
corroborates the stability of reshaped-WF.
Table 1: Comparison of iteration count and time cost among algorithms (n = 1000, m = 8n)

                               reshaped-WF   truncated-WF      WF      AltMinPhase
  real case     iterations          72            182         319.2        5.8
                time cost (s)      0.477          1.232        2.104       0.908
  complex case  iterations         272.7          486.7       915.4      156
                time cost (s)      6.956         12.815       23.306      93.22

3 Numerical Comparison with Other Algorithms
In this section, we demonstrate the numerical efficiency of reshaped-WF by comparing its performance
with other competitive algorithms. Our experiments are run not only for the real-valued case but
also for the complex-valued and CDP cases. All the experiments are implemented in Matlab 2015b and
carried out on a computer equipped with an Intel Core i7 3.4GHz CPU and 12GB RAM.
We first compare the sample complexity of reshaped-WF with those of truncated-WF and WF
via the empirical successful recovery rate versus the number of measurements. For reshaped-WF,
we follow Algorithm 1 with suggested parameters. For truncated-WF and WF, we use the codes
provided in the original papers with the suggested parameters. We conduct the experiment for real,
complex and CDP cases, respectively. For the real and complex cases, we set the signal dimension $n$
to be 1000, and the ratio $m/n$ takes values from 2 to 6 with a step size of 0.1. For each $m$, we run 100
trials and count the number of successful trials. For each trial, we run a fixed number of iterations
$T = 1000$ for all algorithms. A trial is declared to be successful if $z^{(T)}$, the output of the algorithm,
satisfies $\mathrm{dist}(z^{(T)}, x)/\|x\| \le 10^{-5}$. For the real case, we generate the signal $x \sim \mathcal{N}(0, I_{n\times n})$, and the
measurement vectors $a_i \sim \mathcal{N}(0, I_{n\times n})$ i.i.d. for $i = 1, \ldots, m$. For the complex case, we generate the
signal $x \sim \mathcal{N}(0, I_{n\times n}) + j\,\mathcal{N}(0, I_{n\times n})$ and measurements $a_i \sim \tfrac{1}{2}\mathcal{N}(0, I_{n\times n}) + j\,\tfrac{1}{2}\mathcal{N}(0, I_{n\times n})$
i.i.d. for $i = 1, \ldots, m$. For the CDP case, we generate the signal $x \sim \mathcal{N}(0, I_{n\times n}) + j\,\mathcal{N}(0, I_{n\times n})$, which
yields measurements
$$y^{(l)} = |F D^{(l)} x|, \quad 1 \le l \le L, \qquad (17)$$
where $F$ represents the discrete Fourier transform (DFT) matrix, and $D^{(l)}$ is a diagonal matrix
(mask). We set $n = 1024$ for convenience of the FFT and $m/n = L = 1, 2, \ldots, 8$. All other settings are
the same as those for the real case.
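As an illustration of the CDP measurement model (17), the following sketch generates masked Fourier magnitude measurements. The unit-modulus random-phase masks are an assumption on our part; the exact mask ensemble used in the experiments may differ.

```python
import numpy as np

def cdp_measurements(x, L, rng):
    """Coded diffraction pattern measurements y^(l) = |F D^(l) x| (Eq. 17).
    F is the DFT (applied via FFT); D^(l) are random diagonal masks.
    Mask distribution (unit-modulus random phases) is an assumption."""
    n = x.shape[0]
    masks = np.exp(2j * np.pi * rng.random((L, n)))  # diagonal entries of D^(l)
    return np.abs(np.fft.fft(masks * x, axis=1)), masks

rng = np.random.default_rng(0)
n, L = 1024, 6
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y, masks = cdp_measurements(x, L, rng)
print(y.shape)  # (L, n): m = n * L magnitude measurements in total
```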
Figure 3: Comparison of sample complexity among reshaped-WF, truncated-WF and WF. (Panels plot the empirical success rate versus the number of measurements m: (a) real case, n = 1000; (b) complex case, n = 1000; (c) CDP case, n = 1024.)
Figure 3 plots the fraction of successful trials out of 100 trials for all algorithms, with respect to m. It
can be seen that although reshaped-WF outperforms only WF (not truncated-WF) for the real
case, it outperforms both WF and truncated-WF for the complex and CDP cases. An intuitive explanation
for the real case is that a substantial number of samples with small $|a_i^T z|$ can deviate the gradient, so
that truncation indeed helps to stabilize the algorithm if the number of measurements is not large.
Furthermore, reshaped-WF exhibits a sharper transition than truncated-WF and WF.
We next compare the convergence rate of reshaped-WF with those of truncated-WF, WF and AltMinPhase. We run all of the algorithms with the suggested parameter settings in the original codes. We generate signals and measurements in the same way as in the first experiment, with n = 1000, m = 8n.
All algorithms are seeded with the reshaped-WF initialization. In Table 1, we list the number of iterations
and the time cost for these algorithms to achieve a relative error of $10^{-14}$, averaged over 10 trials.
Clearly, reshaped-WF takes many fewer iterations and runs much faster than truncated-WF
and WF. Although reshaped-WF takes more iterations than AltMinPhase, it runs much faster than
AltMinPhase due to the fact that each iteration of AltMinPhase needs to solve a least-squares problem
that takes much longer time than a simple gradient update in reshaped-WF.
We also compare the performance of the above algorithms on the recovery of a real image from the
Fourier intensity measurements (2D CDP with the number of masks L = 16). The image (provided in
Suppl. D) is the Milky Way Galaxy with resolution 1920 × 1080. Table 2 lists the number of iterations
and the time cost of the above four algorithms to achieve a relative error of $10^{-15}$. It can be seen that
reshaped-WF outperforms all three other algorithms in computational time cost. In particular, it is
two times faster than truncated-WF and six times faster than WF in terms of both the number of
iterations and computational time cost.
Table 2: Comparison of iterations and time cost among algorithms on the Galaxy image (L = 16)

Algorithms          iterations  time cost(s)
reshaped-WF         65          141
truncated-WF        160         567
WF                  420         998
AltMinPhase         110         213
We next demonstrate the robustness of reshaped-WF to noise corruption and compare it with truncated-WF. We consider the phase retrieval problem in imaging applications, where random Poisson noise
is often used to model the sensor and electronic noise Fogel et al. [2013]. Specifically, the noisy
measurements of intensity can be expressed as
$$y_i = \sqrt{\alpha \cdot \mathrm{Poisson}\!\left(|a_i^T x|^2 / \alpha\right)}, \quad \text{for } i = 1, 2, \ldots, m,$$
where $\alpha$ denotes the level of input noise, and $\mathrm{Poisson}(\lambda)$ denotes a random sample generated by the
Poisson distribution with mean $\lambda$. It can be observed from Figure 4 that reshaped-WF performs better
than truncated-WF in terms of recovery accuracy under different noise levels.
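A minimal sketch of this noisy measurement model (variable names are ours, assuming a NumPy random generator):

```python
import numpy as np

def poisson_noisy_measurements(A, x, alpha, rng):
    """y_i = sqrt(alpha * Poisson(|a_i^T x|^2 / alpha)).
    E[y_i^2] equals the clean intensity; larger alpha means stronger noise."""
    intensity = np.abs(A @ x) ** 2
    return np.sqrt(alpha * rng.poisson(intensity / alpha))
```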
Figure 4: Comparison of relative error under Poisson noise between reshaped-WF and truncated-WF. (Relative error versus number of iterations, for noise levels α = 1 and α = 0.001.)
4 Conclusion
In this paper, we proposed reshaped-WF to recover a signal from a quadratic system of equations,
based on a nonconvex and nonsmooth quadratic loss function of absolute values of measurements.
This loss function sacrifices the smoothness but enjoys advantages in statistical and computational
efficiency. It also has the potential to be extended to various scenarios. One interesting direction is to
extend such an algorithm to exploit signal structures (e.g., nonnegativity, sparsity, etc.) to assist the
recovery. The lower-order loss function may offer great simplicity in proving performance guarantees in
such cases. Another interesting topic is to study a stochastic version of reshaped-WF. We have observed
in preliminary experiments that the stochastic version of reshaped-WF converges fast numerically.
It will be of great interest to fully understand the theoretical performance of such an algorithm and
explore the reason behind its fast convergence.
Acknowledgments
This work is supported in part by the grants AFOSR FA9550-16-1-0077 and NSF ECCS 16-09916.
References
J. V. Burke, A. S. Lewis, and M. L. Overton. A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM Journal on Optimization, 15(3):751-779, 2005.
T. T. Cai, X. Li, and Z. Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. arXiv preprint arXiv:1506.03382, 2015.
E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241-1274, 2013.
E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985-2007, 2015.
A. Chai, M. Moscoso, and G. Papanicolaou. Array imaging using intensity-only measurements. Inverse Problems, 27(1), 2011.
Y. Chen and E. Candès. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Advances in Neural Information Processing Systems (NIPS), 2015.
J. D. Donahue. Products and quotients of random variables and their applications. Technical report, DTIC Document, 1964.
J. R. Fienup. Phase retrieval algorithms: a comparison. Applied Optics, 21(15):2758-2769, 1982.
F. Fogel, I. Waldspurger, and A. d'Aspremont. Phase retrieval for imaging problems. arXiv preprint arXiv:1304.7735, 2013.
R. W. Gerchberg. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35:237, 1972.
X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
D. Gross, F. Krahmer, and R. Kueng. Improved recovery guarantees for phase retrieval from coded diffraction patterns. Applied and Computational Harmonic Analysis, 2015.
K. C. Kiwiel. Convergence of the gradient sampling algorithm for nonsmooth nonconvex optimization. SIAM Journal on Optimization, 18(2):379-388, 2007.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.
A. Y. Kruger. On Fréchet subdifferentials. Journal of Mathematical Sciences, 116(3):3325-3358, 2003.
P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. Advances in Neural Information Processing Systems (NIPS), 2013.
P. Ochs, A. Dosovitskiy, T. Brox, and T. Pock. On iteratively reweighted algorithms for nonsmooth nonconvex optimization in computer vision. SIAM Journal on Imaging Sciences, 8(1):331-372, 2015.
Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev. Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Processing Magazine, 32(3):87-109, 2015.
J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. arXiv preprint arXiv:1602.06664, 2016.
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Compressed Sensing, Theory and Applications, pages 210-268, 2012.
I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1-2):47-81, 2015.
K. Wei. Solving systems of phaseless equations via Kaczmarz methods: a proof of concept study. Inverse Problems, 31(12):125008, 2015.
H. Zhang, Y. Chi, and Y. Liang. Provable non-convex phase retrieval with outliers: Median truncated Wirtinger flow. arXiv preprint arXiv:1603.03805, 2016.
5,880 | 632 | Analog VLSI Implementation of
Multi-dimensional Gradient Descent
David B. Kirk, Douglas Kerns, Kurt Fleischer, Alan H. Barr
California Institute of Technology
Beckman Institute 350-74
Pasadena, CA 91125
E-mail: dk@egg.gg.caltech.edu
Abstract
We describe an analog VLSI implementation of a multi-dimensional
gradient estimation and descent technique for minimizing an on-chip scalar function f(). The implementation uses noise injection and multiplicative correlation to estimate derivatives, as in
[Anderson, Kerns 92]. One intended application of this technique
is setting circuit parameters on-chip automatically, rather than
manually [Kirk 91]. Gradient descent optimization may be used
to adjust synapse weights for a backpropagation or other on-chip
learning implementation. The approach combines the features of
continuous multi-dimensional gradient descent and the potential
for an annealing style of optimization. We present data measured
from our analog VLSI implementation.
1 Introduction
This work is similar to [Anderson, Kerns 92], but represents two advances. First, we
describe the extension of the technique to multiple dimensions. Second, we demonstrate an implementation of the multi-dimensional technique in analog VLSI, and
provide results measured from the chip. Unlike previous work using noise sources
in adaptive systems, we use the noise as a means of estimating the gradient of a
function f(y), rather than performing an annealing process [Alspector 88]. We also
estimate gradients continuously in position and time, in contrast to [Umminger 89]
and [Jabri 91], which utilize discrete position gradient estimates.
789
790
Kirk, Kerns, Fleischer, and Barr
It is interesting to note the existence of related algorithms, also presented in this
volume [Cauwenberghs 93] [Alspector 93] [Flower 93]. The main difference is that
our implementation operates in continuous time, with continuous differentiation
and integration operators. The other approaches realize the integration and differentiation processes as discrete addition and subtraction operations, and use unit
perturbations. [Cauwenberghs 93] provides a detailed derivation of the convergence
and scaling properties of the discrete approach, and a simulation. [Alspector 93]
provides a description of the use of the technique as part of a neural network hardware architecture, and provides a simulation. [Flower 93] derived a similar discrete
algorithm from a node perturbation perspective in the context of multi-layered feedforward networks. Our work is similar in spirit to [Dembo 90] in that we don't make
any explicit assumptions about the "model" that is embodied in the function f().
The function may be implemented as a neural network. In that case, the gradient
descent is on-chip learning of the parameters of the network.
We have fabricated a working chip containing the continuous-time multidimensional gradient descent circuits. This paper includes chip data for individual circuit components, as well as the entire circuit performing multi-dimensional
gradient descent and annealing.
2 The Gradient Estimation Technique

Figure 1: Gradient estimation technique from [Anderson, Kerns 92]
Anderson and Kerns [Anderson, Kerns 92] describe techniques for one-dimensional
gradient estimation in analog hardware. The gradient is estimated by correlating
(using a multiplier) the output of a scalar function f(v(t)) with a noise source
n(t), as shown in Fig. 1. The function input y(t) is additively "contaminated" by
the noise n(t) to produce v(t) = y(t) + n(t). A scale factor B is used to set the
scale of the noise to match the function output, which improves the signal-to-noise
ratio. The signals are "high-pass" filtered to approximate differentiation (shown
as d/dt operators in Fig. 1) directly before the multiplication. The results of the
multiplication are "low-pass" filtered to approximate integration.
The gradient estimate is integrated over time, to smooth out some of the noise and
to damp the response. This smoothed estimate is compared with a "zero" reference,
Th~ contents of Fig. 1 are represented by the "Gradient Estimation" box in Fig. 2.
We have chosen to implement the multi-dimensional technique in analog VLSI. We
Figure 2: Closing the loop: performing gradient descent using the gradient estimate.
will not reproduce here the one-dimensional analysis from [Anderson, Kerns 92],
but summarize some of the more important results, and provide a multi-dimensional
derivation. [Anderson 92] provides a more detailed theoretical discussion.
3 Multi-dimensional Derivation
The multi-dimensional gradient descent operation that we are approximating can
be written as follows:
$$\mathbf{y}'(t) = -k\,\nabla f\big(\mathbf{y}(t)\big) \qquad (1)$$
where y and y' are vectors, and the solution is obtained continuously in time t,
rather than at discrete ti. The circuit described in the block diagram in Fig. 1
computes an approximation to the gradient:
$$E\Big[n_i'(t)\,\frac{d}{dt} f\big(\mathbf{y}(t) + \mathbf{n}(t)\big)\Big] \approx a\,\frac{\partial f}{\partial y_i} \qquad (2)$$
We approximate the operations of differentiation and integration in time by realizable high-pass and low-pass filters, respectively. To see that Eq. 2 is valid, and that
this result is useful for approximating Eq. 1, we sketch an N-dimensional extension
of [Anderson 92]. Using the chain rule,
$$\frac{d}{dt} f\big(\mathbf{y}(t) + \mathbf{n}(t)\big) = \sum_j \big(y_j'(t) + n_j'(t)\big)\,\frac{\partial f}{\partial y_j} \qquad (3)$$

Assuming $n_j'(t) \gg y_j'(t)$, the rhs is approximated to produce

$$\frac{d}{dt} f\big(\mathbf{y}(t) + \mathbf{n}(t)\big) \approx \sum_j n_j'(t)\,\frac{\partial f}{\partial y_j} \qquad (4)$$

Multiplying both sides by $n_i'(t)$, and taking the expectation integral operator $E[\,\cdot\,]$ of each side,

$$E\Big[n_i'(t)\,\frac{d}{dt} f\big(\mathbf{y}(t)+\mathbf{n}(t)\big)\Big] = E\Big[n_i'(t) \sum_j n_j'(t)\,\frac{\partial f}{\partial y_j}\Big] \qquad (5)$$

If the noise sources $n_i(t)$ and $n_j(t)$ are uncorrelated, $n_i'(t)$ is independent of $n_j'(t)$ when $i \ne j$, and the sum on the right has a contribution only when $i = j$,

$$E\Big[n_i'(t)\,\frac{d}{dt} f\big(\mathbf{y}(t)+\mathbf{n}(t)\big)\Big] = E\Big[n_i'(t)\,n_i'(t)\,\frac{\partial f}{\partial y_i}\Big] \qquad (6)$$

Writing the noise power as $a = E[n_i'(t)^2]$,

$$E\Big[n_i'(t)\,\frac{d}{dt} f\big(\mathbf{y}(t)+\mathbf{n}(t)\big)\Big] \approx a\,\frac{\partial f}{\partial y_i} \qquad (7)$$

The expectation operator $E[\,\cdot\,]$ can be used to smooth random variations of the noise $n_j(t)$. So, we have

$$\frac{\partial f}{\partial y_i} \approx \frac{1}{a}\, E\Big[n_i'(t)\,\frac{d}{dt} f\big(\mathbf{y}(t)+\mathbf{n}(t)\big)\Big] \qquad (8)$$

Since the descent rate $k$ is arbitrary, we can absorb $a$ into $k$. Using equation 8, we can approximate the gradient descent technique as follows:

$$y_i'(t) \approx -k\, E\Big[n_i'(t)\,\frac{d}{dt} f\big(\mathbf{y}(t)+\mathbf{n}(t)\big)\Big] \qquad (9)$$
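The following discrete-time simulation sketches the estimator in Eq. 9, with first differences standing in for the high-pass differentiators and a leaky integrator for the low-pass expectation operator. It is a software caricature of the circuit, not a circuit model; the overall scale factor a is left arbitrary, and all names are ours.

```python
import numpy as np

def grad_estimate(f, y, T=200_000, dt=1e-3, sigma=0.05, tau=0.5, seed=0):
    """Estimate the direction of grad f at y by correlating the time
    derivative of injected noise n(t) with the time derivative of
    f(y + n(t)), then low-pass filtering the product (cf. Eq. 9)."""
    rng = np.random.default_rng(seed)
    d = len(y)
    est = np.zeros(d)
    n_prev = np.zeros(d)
    f_prev = f(y + n_prev)
    for _ in range(T):
        n = sigma * rng.standard_normal(d)   # independent noise sources n_i(t)
        f_now = f(y + n)
        dn = (n - n_prev) / dt               # high-pass filter ~ d/dt of noise
        df = (f_now - f_prev) / dt           # high-pass filter ~ d/dt of f(v(t))
        est += (dt / tau) * (dn * df - est)  # leaky integrator ~ E[.]
        n_prev, f_prev = n, f_now
    return est  # proportional to grad f(y); the scale is absorbed into k

f = lambda v: float((v ** 2).sum())          # toy quadratic scalar function
g = grad_estimate(f, np.array([1.0, -2.0]))
print(g / np.abs(g).max())                   # direction ~ grad f = [2, -4]
```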
4 Elements of the Multi-dimensional Implementation
We have designed, fabricated, and tested a chip which allows us to test these ideas.
The chip implementation can be decomposed into six distinct parts:
noise source(s): an analog VLSI circuit which produces a noise function. An independent, correlation-free noise source is needed for each input dimension,
designated ni(t). The noise circuit is described in [Alspector 91].
target function: a scalar function f(y1, y2, ..., yN) of N input variables, bounded
below, which is to be minimized [Kirk 91]. The circuit in this case is a 4-dimensional variant of the bump circuit described in [Delbrück 91]. In the
general case, this f() can be any scalar function or error metric, computed
by some circuit. Specifically, the function may be a neural network.
input signal(s): the inputs Yi(t) to the function fO. These will typically be onchip values, or real-world inputs.
multiplier circuit(s): the multiplier computes the correlation between the noise
values and the function output. Offsets in the multiplication appear as
systematic errors in the gradient estimate, so it is important to compensate
for the offsets. Linearity is not especially important, although monotonicity
is critical. Ideally, the multiplication will also have a "tanh-like" character,
limiting the output range for extreme inputs.
integrator: an integration over time is approximated by a low-pass filter
differentiator: the time derivatives of the noise signals and the function are approximated by a high-pass filter.
The N inputs, yi(t), are additively "contaminated" with the noise signals, ni(t), by
capacitive coupling, producing vi(t) = yi(t) + ni(t), the inputs to the function f().
The function output is differentiated, as are the noise functions. Each differentiated
noise signal is correlated with the differentiated function output, using the multipliers. The results are low-pass filtered, providing N partial derivative estimates,
for the N input dimensions, shown for 4 dimensions in Fig. 3.
The function f() is implemented as a 4-dimensional extension of Delbrück's
[Delbrück 91] bump circuit. Details of the N-dimensional bump circuit can be found
in [Kirk 93]. For learning and other applications, the function f() can implement
some other error metric to be minimized.
Analog VLSI Implementation of Multi-dimensional Gradient Descent
Figure 3: Block diagram for a 4-dimensional gradient estimation circuit.
5 Chip Results
We have tested chips implementing the gradient estimation and gradient descent
techniques described in this paper. Figure 4 shows the gradient estimate, without
the closed loop descent process. Figure 5 shows the trajectories of two state variables
during the 2D gradient descent process. Figure 6 shows the gradient descent process
in operation on a 2D bump surface, and Fig. 7 shows how, using appropriate choice
of noise scale, we can perform annealing using the gradient estimation hardware.
Figure 4: Measured Chip Data: 1D Gradient Estimate. Upper curves are 1D bump output as the input y(t) is a slow triangle wave. Lower curves are gradient estimates. (left) raw data, and (right) average of 1024 runs.
Figure 5: Measured Chip Data: 2D Gradient Descent. The curves above show the function optimization by gradient descent for 2 variables. Each curve represents the path of one of the state variables yi(t) from some initial values to the values for which the function f() is minimized. (left) raw data, and (right) average of 8 runs.
6 Conclusions
We have implemented an analog VLSI structure for performing continuous multidimensional gradient descent, and the gradient estimation uses only local information. The circuitry is compact and easily extensible to higher dimensions. This
implementation leads to on-chip multi-dimensional optimization, such as is needed
to perform on-chip learning for a hardware neural network . We can also perform a
kind of annealing by adding a schedule to the scale of the noise input. Our approach
also has some drawbacks, however. The gradient estimation is sensitive to the input offsets in the multipliers and integrators, since those offsets result in systematic
errors. Also, the gradient estimation technique adds noise to the input signals.
We hope that with only small additional circuit complexity, the performance of
analog VLSI circuits can be greatly increased by permitting them to be intrinsically
adaptive . On-chip implementation of an approximate gradient descent technique is
an important step in this direction.
Acknowledgements
This work was supported in part by an AT&T Bell Laboratories Ph.D. Fellowship,
and by grants from Apple, DEC, Hewlett Packard, and IBM. Additional support
was provided by NSF (ASC-89-20219), as part of the NSF/DARPA STC for Computer Graphics and Scientific Visualization. All opinions, findings, conclusions, or
recommendations expressed in this document are those of the author and do not
necessarily reflect the views of the sponsoring agencies.
Figure 6: Measured Chip Data: 2D Gradient Descent. Here we see the results for 2D gradient descent on a 2D bump surface. Both the bump surface and the descent path are actual data measured from our chips.

Figure 7: Measured Chip Data: 2D Gradient Descent and Annealing. Here we see the effects of varying the amplitude of the noise. The dots represent points along the optimization path. At left, with small magnitude noise, the process descends to a local minimum. At right, with larger magnitude, the descent process escapes to the global minimum. A schedule of gradually decreasing noise amplitude could reduce the probability of getting caught in undesirable local minima, and increase the probability of converging to a small region near a more desirable minimum, or even the global minimum.

References

[Alspector 93] Alspector, J., R. Meir, B. Yuhas, and A. Jayakumar, "A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks," in Advances in Neural Information Processing Systems, Vol. 5, Morgan Kaufmann, San Mateo, CA, 1993.
[Alspector 91] Alspector, J., J. W. Gannett, S. Haber, M. B. Parker, and R. Chu, "A VLSI-Efficient Technique for Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural Networks," IEEE Transactions on Circuits and Systems, Vol. 38, No. 1, pp. 109-123, January, 1991.
[Alspector 88] Alspector, J., B. Gupta, and R. B. Allen, "Performance of a stochastic learning microchip," in Advances in Neural Information Processing Systems, Vol. 1, Denver Colorado, Nov. 1988. D. S. Touretzky, ed., Morgan Kaufmann Publishers, 1989, pp. 748-760.
[Anderson, Kerns 92] Anderson, Brooke P., and Douglas Kerns, "Using Noise Injection and Correlation in Analog Hardware to Estimate Gradients," submitted to IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications.
[Anderson 92] Anderson, Brooke P., "Low-pass Filters as Expectation Operators for Multiplicative Noise," submitted to IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications.
[Cauwenberghs 93] Cauwenberghs, Gert, "A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization," in Advances in Neural Information Processing Systems, Vol. 5, Morgan Kaufmann, San Mateo, CA, 1993.
[Delbrück 91] Delbrück, Tobias, "'Bump' Circuits for Computing Similarity and Dissimilarity of Analog Voltages," Proceedings of International Joint Conference on Neural Networks, July 8-12, 1991, Seattle Washington, pp. I-475-479. (Extended version as Caltech Computation and Neural Systems Memo Number 10.)
[Dembo 90] Dembo, A., and T. Kailath, "Model-Free Distributed Learning," IEEE Transactions on Neural Networks, Vol. 1, No. 1, pp. 58-70, 1990.
[Flower 93] Flower, B., and M. Jabri, "Summed Weight Neuron Perturbation: An O(n) Improvement over Weight Perturbation," in Advances in Neural Information Processing Systems, Vol. 5, Morgan Kaufmann, San Mateo, CA, 1993.
[Jabri 91] Jabri, M., S. Pickard, P. Leong, Z. Chi, and B. Flower, "Architectures and Implementations of Right Ventricular Apex Signal Classifiers for Pacemakers," IEEE Neural Information Processing Systems 1991 (NIPS 91), Morgan Kaufmann, San Diego, 1991.
[Kerns 92] Kerns, Douglas, "A Compact Noise Source for VLSI Applications," submitted to IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications.
[Kirk 91] Kirk, David, Kurt Fleischer, and Alan Barr, "Constrained Optimization Applied to the Parameter Setting Problem for Analog Circuits," IEEE Neural Information Processing Systems 1991 (NIPS 91), Morgan Kaufmann, San Diego, 1991.
[Kirk 93] Kirk, David, "Accurate and Precise Computation using Analog VLSI, with Applications to Computer Graphics and Neural Networks," Ph.D. Thesis, California Institute of Technology, Caltech-CS-TR-93-??, June, 1993.
[Mead 89] Mead, Carver, "Analog VLSI and Neural Systems," Addison-Wesley, 1989.
[Platt 89] Platt, John, "Constrained Optimization for Neural Networks and Computer Graphics," Ph.D. Thesis, California Institute of Technology, Caltech-CS-TR-89-07, June, 1989.
[Umminger 89] Umminger, Christopher B., and Steven P. DeWeerth, "Implementing Gradient Following in Analog VLSI," Advanced Research in VLSI, MIT Press, Boston, 1989, pp. 195-208.
delbriick:4 david:3 california:3 nip:2 flower:5 below:1 kauffman:1 summarize:1 packard:1 haber:1 critical:1 advanced:1 technology:3 embodied:1 gannett:1 acknowledgement:1 multiplication:4 interesting:1 oy:1 uncorrelated:1 ibm:1 supported:1 free:2 side:2 institute:4 taking:1 distributed:1 curve:4 dimension:5 valid:1 world:1 computes:2 author:1 adaptive:2 san:5 transaction:5 approximate:5 compact:2 nov:1 absorb:1 monotonicity:1 global:2 correlating:1 don:1 continuous:5 ca:4 necessarily:1 stc:1 jabri:3 main:1 rh:1 noise:29 n2:1 fig:8 parker:1 slow:1 position:2 explicit:1 kirk:12 offset:4 gupta:1 adding:1 magnitude:2 nat:1 dissimilarity:1 boston:1 expressed:1 scalar:4 recommendation:1 kailath:1 content:1 specifically:1 operates:1 pas:8 support:1 tested:2 correlated:1 |
5,881 | 6,320 | Efficient state-space modularization for planning:
theory, behavioral and neural signatures
Daniel McNamee, Daniel Wolpert, M?t? Lengyel
Computational and Biological Learning Lab
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ, United Kingdom
{d.mcnamee|wolpert|m.lengyel}@eng.cam.ac.uk
Abstract
Even in state-spaces of modest size, planning is plagued by the "curse of dimensionality". This problem is particularly acute in human and animal cognition given
the limited capacity of working memory, and the time pressures under which planning often occurs in the natural environment. Hierarchically organized modular
representations have long been suggested to underlie the capacity of biological
systems 1,2 to efficiently and flexibly plan in complex environments. However, the
principles underlying efficient modularization remain obscure, making it difficult to
identify its behavioral and neural signatures. Here, we develop a normative theory
of efficient state-space representations which partitions an environment into distinct
modules by minimizing the average (information theoretic) description length of
planning within the environment, thereby optimally trading off the complexity of
planning across and within modules. We show that such optimal representations
provide a unifying account for a diverse range of hitherto unrelated phenomena at
multiple levels of behavior and neural representation.
1 Introduction
In a large and complex environment, such as a city, we often need to be able to flexibly plan so that we
can reach a wide variety of goal locations from different start locations. How might this problem be
solved efficiently? Model-free decision making strategies 3 would either require relearning a policy,
determining which actions (e.g. turn right or left) should be chosen in which state (e.g. locations in
the city), each time a new start or goal location is given, a very inefficient use of experience resulting
in prohibitively slow learning (but see Ref. 4). Alternatively, the state-space representation used for
determining the policy can be augmented with extra dimensions representing the current goal, such
that effectively multiple policies can be maintained 5, or a large "look-up table" of action sequences
connecting any pair of start and goal locations can be represented, again leading to inefficient use of
experience and potentially excessive representational capacity requirements.
In contrast, model-based decision-making strategies rely on the ability to simulate future trajectories
in the state space and use this in order to flexibly plan in a goal-dependent manner. While such
strategies are data- and (long term) memory-efficient, they are computationally expensive, especially
in state-spaces for which the corresponding decision tree has a large branching factor and depth 6 .
Endowing state-space representations with a hierarchical structure is an attractive approach to
reducing the computational cost of model-based planning 7-11 and has long been suggested to be
a cornerstone of human cognition 1. Indeed, recent experiments in human decision-making have
gleaned evidence for the use and flexible combination of "decision fragments" 12, while neuroimaging
work has identified hierarchical action-value reinforcement learning in humans 13 and indicated that
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
dorsolateral prefrontal cortex is involved in the passive clustering of sequentially presented stimuli
when transition probabilities obey a "community" structure 14.
Despite such a strong theoretical rationale and empirical evidence for the existence of hierarchical
state-space representations, the computational principles underpinning their formation and utilization
remain obscure. In particular, previous approaches proposed algorithms in which the optimal state-space decomposition was computed based on the optimal solution in the original (non-hierarchical)
representation 15,16 . Thus, the resulting state-space partition was designed for a specific (optimal)
environment solution rather than the dynamics of the planning algorithm itself, and also required a
priori knowledge of the optimal solution to the planning problem (which may be difficult to obtain in
general and renders the resulting hierarchy obsolete). Here, we compute a hierarchical modularization
optimized for planning directly from the transition structure of the environment, without assuming
any a priori knowledge of optimal behavior. Our approach is based on minimizing the average
information theoretic description length of planning trajectories in an environment, thus explicitly
optimizing representations for minimal working memory requirements. The resulting representation
are hierarchically modular, such that planning can first operate at a global level across modules
acquiring a high-level ?rough picture? of the trajectory to the goal and, subsequently, locally within
each module to ?fill in the details?.
The structure of the paper is as follows. We first describe the mathematical framework for optimizing
modular state-space representations (Section 2), and also develop an efficient coding-based approach
to neural representations of modularised state spaces (Section 2.6). We then test some of the key
predictions of the theory in human behavioral and neural data (Section 3), and also describe how this
framework can explain several temporal and representational characteristics of "task-bracketing" and
motor chunking in rodent electrophysiology (Section 4). We end by discussing future extensions and
applications of the theory (Section 5).
2 Theory

2.1 Basic definitions
In order to focus on situations which require flexible policy development based on dynamic goal
requirements, we primarily consider discrete "multiple-goal" Markov decision processes (MDPs).
Such an MDP, M := {S, A, T , G}, is composed of a set of states S, a set of actions A (a subset
As of which is associated with each state s ∈ S), and a transition function T which determines the
probability of transitioning to state sj upon executing action a in state si, p(sj|si, a) := T(si, a, sj).
A task (s, g) is defined by a start state s ∈ S and a goal state g ∈ G, and the agent's objective is to
identify a trajectory of via states v which gets the agent from s to g. We define a modularization¹
M of the state-space S to be a set of Boolean matrices M := {Mi}i=1...m indicating the module
membership of all states s ∈ S. That is, for all s ∈ S, there exists i ∈ 1, ..., m such that
Mi(s) = 1, Mj(s) = 0 ∀j ≠ i. We assume this to form a disjoint cover of the state-space
using the expression s ? M to indicate that a state s is a member of a module M . As our planning
algorithm P, we consider random search as a worst-case scenario although, in principle, our approach
applies to any algorithm such as dynamic programming or Q-learning 3 and we expect the optimal
modularization to depend on the specific algorithm utilized.
We describe and analyze planning as a Markov process. For planning, the underlying state-space is
the same as that of the MDP and the transition matrix T is a marginalization over a planning policy
$\pi_{\mathrm{plan}}$ (which, here, we assume is the random policy $\pi_{\mathrm{rand}}(a|s_i) := \frac{1}{|\mathcal{A}_{s_i}|}$):

$$T_{ij} = \sum_a \pi_{\mathrm{plan}}(a|s_i)\, \mathcal{T}(s_i, a, s_j) \qquad (1)$$
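As a concrete illustration of Eq. 1, the following sketch marginalizes an MDP transition function over the random policy to obtain the planning chain; the data structures and function name are our own illustrative choices.

```python
import numpy as np

def planning_chain(T_sas, actions):
    """Build the planning Markov chain of Eq. 1 by marginalizing an MDP
    transition function over the random policy pi_rand(a|s) = 1/|A_s|.
    T_sas   : dict mapping (s, a) -> {s_next: prob}
    actions : dict mapping s -> list of available actions A_s
    """
    states = sorted(actions)
    idx = {s: i for i, s in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for s in states:
        for a in actions[s]:
            for s2, p in T_sas[(s, a)].items():
                T[idx[s], idx[s2]] += p / len(actions[s])
    return T, idx
```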
Given a modularization M, planning at the global level is a Markov process MG corresponding to
a ?low-resolution? representation of planning in the underlying MDP where each state corresponds
1
This is an example of a ?propositional representation? 17,18 and is analogous to state aggregation or ?clustering? 19,20 in reinforcement learning which is typically accomplished via heuristic bottleneck discovery algorithms 21 . Our method is novel in that it does not require the optimal policy as an input and is founded on a
normative principle.
2
to a ?local? module Mi and the transition structure TG is induced from T via marginalization and
normalization 22 over the internal states of the local modules Mi .
2.2 Description length of planning
We use an information-theoretic framework 23,24 to define a measure, the (expected) description
length (DL) of planning, which can be used to quantify the complexity of planning P in the induced
global L(P|MG ) and local modules L(P|Mi ). We will compute the DL of planning, L(P), in a
non-modularized setting and outline the extension to modularized planning DL L(P|M) (elaborating
further in the supplementary material). Given a task (s, g) in an MDP, a solution v(n) to this task
is an $n$-state trajectory such that $v_1^{(n)} = s$ and $v_n^{(n)} = g$. The description length (DL) of this
trajectory is $L(v^{(n)}) := -\log p_{\mathrm{plan}}(v^{(n)})$. A task may admit many solutions corresponding to
different trajectories over the state-space; thus we define the DL of the task $(s, g)$ to be the expectation
over all trajectories which solve this task, namely
$$L(s, g) := E_{v,n}\!\left[L(v^{(n)})\right] = -\sum_{n=1}^{\infty} \sum_{v^{(n)}} p(v^{(n)}|s, g)\, \log p(v^{(n)}|s, g) \qquad (2)$$
This is the (s, g)-th entry of the trajectory entropy matrix H of M. Remarkably, this can be expressed
in closed form 25 :
$$[H]_{sg} = \sum_{v \ne g} \left[(I - T_g)^{-1}\right]_{sv} H_v \qquad (3)$$
where $T$ is the transition matrix of the planning Markov chain (Eq. 1), $T_g$ is a sub-matrix corresponding to the elimination of the $g$-th column and row, and $H_v$ is the local entropy $H_v := H(T_{v\cdot})$ at state
v. Finally, we define the description length L(P) of the planning process P itself over all tasks (s, g)
$$L(P) := E_{s,g}\left[L(s, g)\right] = \sum_{(s,g)} P_s P_g\, L(s, g) \qquad (4)$$
where $P_s$ and $P_g$ are priors of the start and goal states respectively, which we assume to be factorizable,
$P_{(s,g)} = P_s P_g$, for clarity of exposition. In matrix notation, this can be expressed as $L(P) = P_s H P_g^T$,
where $P_s$ is a row-vector of start state probabilities and $P_g$ is a row-vector of goal state probabilities.
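The closed form (3) makes the planning DL (4) straightforward to compute numerically; the sketch below (name ours) assumes the planning chain reaches each goal state with probability one, so that $(I - T_g)$ is invertible.

```python
import numpy as np

def planning_dl(T, Ps, Pg):
    """Expected planning description length L(P) = Ps H Pg^T (Eqs. 3-4).
    T : (n, n) planning-chain transition matrix; Ps, Pg : task priors."""
    n = T.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logT = np.where(T > 0, np.log(T), 0.0)
    Hv = -(T * logT).sum(axis=1)              # local entropies H_v (in nats)
    H = np.zeros((n, n))                      # trajectory entropy matrix
    for g in range(n):
        keep = np.flatnonzero(np.arange(n) != g)
        D = np.linalg.inv(np.eye(n - 1) - T[np.ix_(keep, keep)])
        H[keep, g] = D @ Hv[keep]             # Eq. 3
    return float(Ps @ H @ Pg)                 # Eq. 4
```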
The planning DL, L(P|M), of a nontrivial modularization of an MDP requires (1) the computation
of the DL of the global L(P|MG ) and the local planning processes L(P|Mi ) for global MG and
local Mi modular structures respectively, and (2) the weighting of these quantities by the correct
priors. See supplementary material for further details.
2.3 Minimum modularized description length of planning
Based on a modularization, planning can be first performed at the global level across modules, and
then subsequently locally within the subset of modules identified by the global planning process
(Fig. 1). Given a task (s, g) where s represents the start state and g represents the goal state, global
search would involve finding a trajectory in MG from the induced initial module (the unique Ms such
that Ms (s) = 1) to the goal module (Mg (g) = 1). The result of this search will be a global directive
across modules $M_s \to \cdots \to M_g$. Subsequently, local planning sub-tasks are solved within each
module in order to "fill in the details". For each module transition $M_i \to M_j$ in $M_G$, a local search
in Mi is accomplished by planning from an entrance state from the previous module, and planning
until an exit state for module Mj is entered. This algorithm is illustrated in Figure 1.
By minimizing the sum of the global L(P|MG ) and local DLs L(P|Mi ), we establish the optimal
modularization M? of a state-space for planning:
$$M^* := \underset{M}{\arg\min}\, \left[L(P|M) + L(M)\right], \quad \text{where} \quad L(P|M) := L(P|M_G) + \sum_i L(P|M_i) \qquad (5)$$
Note that this formulation explicitly trades off the complexity (measured as DL) of planning at the
global level, L(P|MG ), i.e. across modules, and at the local level, L(P|Mi ), i.e. within individual
modules (Fig. 1C-D). In principle, the representational cost of the modularization itself L(M) is also
3
part of the trade-off, but we do not consider it further here for two reasons. First, in the state-spaces
considered in this paper, it is dwarfed by the complexities of planning, $L(M) \ll L(P|M)$
(see the supplementary material for the mathematical characterization of L(M)). Second, it taxes
long-term rather than short-term memory, which is at a premium when planning 26,27 . Importantly,
although computing the DL of a modularization seems to pose significant computational challenges
by requiring the enumeration of a large number of potential trajectories in the environment (across
or within modules), in the supplementary material we show that it can be computed in a relatively
straightforward manner (the only nontrivial operation being a matrix inversion) using the theory of
finite Markov chains 22 .
2.4 Planning compression
The planning DL L(s, g) for a specific task (s, g) describes the expected difficulty in finding an
intervening trajectory v for a task (s, g). For example, in a binary coding scheme where we assign
binary sequences to each state, the expected length of a string of random 0s and 1s corresponding to a
trajectory will be shorter in a modularized compared to a non-modularized representation. Thus, we
can examine the relative benefit of an optimal modularization, in the Shannon limit, by computing
the ratio of trajectory description lengths in modularized and non-modularized representations of
a task or environment 28. In line with spatial cognition terminology 29, we refer to this ratio as the
compression factor of the trajectory.
Figure 1. Modularized planning. A. Schematic exhibiting how planning, which could be highly complex using a flat state space representation (left), can be reformulated into a hierarchical planning process via a modularization (center and right). Boxes (circles or squares) show states, lines are transitions (gray: potential transitions, black: transitions considered in current plan). Once the "global directive" has been established by searching in a low-resolution representation of the environment (center), the agent can then proceed to "fill in the details" by solving a series of local planning sub-tasks (right). Formulae along the bottom show the DL of the corresponding planning processes. B. Given a modularization, a serial hierarchical planning process unfolds in time beginning with a global search task followed by local sub-tasks. As each global/local planning task is initiated in series, there is a phasic increase in processing which scales with planning difficulty in the upcoming module as quantified by the local DL, L(P|Mi). C. Map of London's Soho state-space, streets (lines, with colors coding degree centrality) correspond to states (courtesy of Hugo Spiers). D. Minimum expected planning DL of London's Soho as a function of the number of modules (minimizing over all modularizations with the given number of modules). Red: global, blue: local, black: total DL. E. Histogram of compression factors of 200 simulated trajectories from randomly chosen start to goal locations in London's Soho. F. Absolute entropic centrality (EC) differences within and across connected modules in the optimal modularization of the Soho state-space. G. Scatter plot of degree and entropic centralities of all states in the Soho state-space.
2.5 Entropic centrality
The computation of the planning DL (Section 2.2) makes use of the trajectory entropy matrix H of a
Markov chain. Since H is composed of weighted sums of local entropies Hv , it suggests that we can
express the contribution of a particular state v to the planning DL by summing its terms for all tasks
(s, g). Thus, we define the entropic centrality, Ev , of a state v via
$$E_v = \sum_{s,g} D_{svg}\, H_v \qquad (6)$$
where we have made use of the fundamental tensor of a Markov chain $D$ with components $D_{svg} = \left[(I - T_g)^{-1}\right]_{sv}$. Note that task priors can easily be incorporated into this definition. The entropic
centrality (EC) of a state measures its importance to tasks across the domain and its gradient can
serve as a measure of "subgoalness" for the planning process P. Indeed, we observed in simulations
that one strategy used by an optimal modularization to minimize planning complexity is to "isolate"
planning DL within rather than across modules, such that EC changes more across than within
modules (Fig. 1F). This suggests that changes in EC serve as a good heuristic for identifying modules.
Furthermore, EC is tightly related to the graph-theoretic notion of degree centrality (DC). When transitions are undirected and are deterministically related to action, degree centrality deg(v) corresponds
to the number of states which are accessible from a state v. In such circumstances and assuming a
random policy, we have
$$E_v = \sum_{s,g} D_{svg} \log\!\big(\deg(v)\big) \qquad (7)$$
since the local entropy of the random policy at $v$ is $H_v = -\sum \frac{1}{\deg(v)} \log \frac{1}{\deg(v)} = \log(\deg(v))$.
The ECs and DCs of all states in a state-space reflecting the topology of London's Soho are plotted in
Fig. 1G and show a strong correlation in agreement with this analysis. In Section 3.2 we test whether
this tight relationship, together with the intuition developed above about changes in EC demarcating
approximate module boundaries, provides a normative account of recently observed correlations
between DC and human hippocampal activity during spatial navigation 30 .
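Entropic centrality itself reduces to the same matrix inversions used for the trajectory entropy matrix; the sketch below (name ours) assumes uniform task priors, as in Eq. 6.

```python
import numpy as np

def entropic_centrality(T):
    """E_v = sum_{s,g} D_svg * H_v, with D_svg = [(I - T_g)^{-1}]_{sv} (Eq. 6)."""
    n = T.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logT = np.where(T > 0, np.log(T), 0.0)
    Hv = -(T * logT).sum(axis=1)              # local entropies H_v
    E = np.zeros(n)
    for g in range(n):
        keep = np.flatnonzero(np.arange(n) != g)
        D = np.linalg.inv(np.eye(n - 1) - T[np.ix_(keep, keep)])
        E[keep] += D.sum(axis=0) * Hv[keep]   # sum over start states s
    return E
```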
2.6 Efficient coding in modularized state-spaces
In addition to ?compressing? the planning process, modularization also enables a neural channel to
transmit information (for example, a desired state sequence) in a more efficient pattern of activity
using a hierarchical entropy coding strategy 31 whereby contextual codewords signaling the entrance
to and exit from a module constrain the set of states that can be transmitted to those within a
module thus allowing them to be encoded with shorter description lengths according to their relative
probabilities 28 (i.e. a state that forms part of many trajectories will have a shorter description length
than one that does not). Assuming that neurons take advantage of these strategies in an efficient
code 32 , several predictions can be made with regard to the representational characteristics of neuronal
populations encoding components of optimally modularized state-spaces. We suggest that the phasic
neural responses (known as "start" and "stop" signals) which have been observed to encase learned
behavioral sequences in a wide range of control paradigms across multiple species 33-36 serve this
purpose in modularized control architectures. Our theory makes several predictions regarding the
temporal dynamics and population characteristics of these start/stop codes. First, it determines
a specific temporal pattern of phasic start/stop activity as an animal navigates using an optimally
modularized representation of a state-space. Second, neural representations for the start signals should
depend on the distribution of modules, while the stop codes should be sensitive to the distribution
of components within a module. Considering the minimum average description length of each of
these distribution, we can make predictions regarding how much neural resources (for example, the
number of neurons) should be assigned to represent each of these start/stop variables. We verify these
predictions in published neural data 36,34 in Section 4.
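As an illustration of the coding argument, a short sketch of how start/stop codeword lengths would follow from module entry/exit statistics; the probabilities below are placeholders of our own, not values from the paper:

```python
import numpy as np

# Shannon code lengths: a start (module-entry) or stop (module-exit)
# event with probability p gets a codeword of length -log2(p) bits.
p_start = np.array([0.5, 0.3, 0.2])  # placeholder entry probabilities
p_stop = np.array([0.6, 0.3, 0.1])   # placeholder exit probabilities
dl_start = -np.log2(p_start)
dl_stop = -np.log2(p_stop)
# More neural resources are predicted for whichever set of codes has
# the larger minimum average description length (i.e. the entropy).
print((p_start * dl_start).sum(), (p_stop * dl_stop).sum())
```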
3 Route compression and state-space segmentation in spatial cognition
3.1 Route compression
We compared the compression afforded by optimal modularization to a recent behavioral study examining trajectory compression during mental navigation 29. In this task, students at the University of Toronto were asked to mentally navigate between a variety of start and goal locations on their campus and the authors computed the (inverse) ratio between the duration of this mental navigation and the typical time it would physically take to walk the same distance. Although mental navigation time was substantially smaller than physical time, it was not simply a constant fraction of it, but instead the ratio of the two (the compression factor) became higher with longer route length (Fig. 2A). In fact, while in the original study only a linear relationship between compression factor and physical route length was considered, reanalysing the data yielded a better fit by a logarithmic function (R2 = 0.69 vs. 0.46).
In order to compare our theory with these data, we computed compression factors between the
optimally modularized and the non-modularized version of an environment. This was because
students were likely to have developed a good knowledge of the campus's spatial structure, and so
we assumed they used an approximately optimal modularization for mental navigation, while the
physical walking time could not make use of this modularization and was bound to the original
non-modularized topology of the campus. As we did not have access to precise geographical data
about the part of the U. Toronto campus that was used in the original experiment, we ran our algorithm
on a part of London Soho which had been used in previous studies of human navigation 30 . Based on
200 simulated trajectories over route lengths of 1 to 10 states, we found that our compression factor showed a similar dependence on route length2 (Fig. 2B) and again was better fit by a logarithmic versus a linear function (R2 = 0.82 vs. 0.72, respectively).
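The linear-versus-logarithmic comparison can be reproduced with a few lines of least-squares fitting; the synthetic data below merely stand in for the simulated trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)
length = np.repeat(np.arange(1, 11), 20).astype(float)  # 200 routes
cf = 1.0 + 0.2 * np.log(length) + 0.05 * rng.standard_normal(200)

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

lin = np.polyfit(length, cf, 1)             # compression ~ a*L + b
log = np.polyfit(np.log(length), cf, 1)     # compression ~ a*log(L) + b
print(r_squared(cf, np.polyval(lin, length)),
      r_squared(cf, np.polyval(log, np.log(length))))
```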
Figure 2. Modularized representations for spatial cognition. A. Compression factor as a function
of route length for navigating the U. Toronto campus (reproduced from Ref. 29) with linear (grey)
and logarithmic fits (blue). B. Compression factors for the optimal modularization in the London
Soho environment. C. Spearman correlations between changes in local planning DL, L(P|M_i), and
changes in different graph-theoretic measures of centrality.
3.2 Local planning entropy and degree centrality
We also modeled a task in which participants, who were trained to be familiar with the environment, navigated between randomly chosen locations in a virtual reality representation of London's Soho by pressing keys to move through the scenes 30. Functional magnetic resonance imaging during this task showed that hippocampal activity during such self-planned (but not guided) navigation correlated most strongly with changes in a topological state 'connectedness' measure known as degree centrality (DC, compared to other standard graph-theoretic measures of centrality such as 'betweenness' and 'closeness'). Although changes in DC are not directly relevant to our theory, we can show that they serve as a good proxy for a fundamental quantity in the theory, planning DL (see Eq. 7), which in turn should be reflected in neural activations.
To relate the optimal modularization, the most direct prediction of our theory, to neural signals, we
made the following assumptions (see also Fig. 1B). 1. Planning (and associated neural activity)
occurs upon entering a new module (as once a plan is prepared, movement across the module can
be automatic without the need for further planning, until transitioning to a new module). 2. The
magnitude of neural activity is related to the local planning DL, L(P|M_i), of the module (as the
higher the entropy, the more trajectories need to be considered, likely activating more neurons with
different tunings for state transitions, or state-action combinations 37 , resulting in higher overall
2 Note that the absolute scale of our compression factor is different from that found in the experiment because we did not account for the trivial compression that comes from the simple fact that it is just generally faster to move mentally than physically.
Figure 3. Neural activities encoding module boundaries. A. T-maze task in which tone determines the location of the reward (reproduced from Ref. 34). Inset: the model's optimal modularization of the discretized T-maze state-space. Note that the critical junction has been extracted to form its own module which isolates the local planning DL caused by the split in the path. B. Empirical data exhibiting the temporal pattern of task-bracketing in dorsolateral striatal (DLS) neurons. Prior to learning the task, ensemble activity was highly variable both spatially and temporally throughout the behavioral trajectory. Reproduced from Ref. 34. C. Simulated firing rates of 'task-responsive' neurons after and before acquiring an optimal modularization. D. The optimal modularization (colored states are in the same module) of a proposed state-space for an operant conditioning task 36. Note that the lever pressing sequences form their own modules and thus require specialized start/stop codes. E. Analyses of striatal neurons suggesting that a larger percentage of neurons encoded lever sequence initiations compared to terminations, and that very few encoded both. Reproduced from Ref. 36. F. Description lengths of start/stop codes in the optimal modularization.
activity in the population). Furthermore, as before, we also assume that participants were sufficiently
familiar with Soho that they used the optimal modularization (as they were specifically trained in
the experiment). Having established that under the optimal modularization entropic centrality (EC)
tends to change more across than within modules (Fig. 1F), and also that EC is closely related to DC
(Fig. 1G), the theory predicts that neural activity should be timed to changes in DC. Furthermore,
the DLs of successive modules along a trajectory will in general be positively correlated with the
differences between their DLs (due to the unavoidable 'regression to the mean' effect3). Noting that
the planning DL of a module is just the (weighted) average EC of its states (see Section 2.5), the
theory thus more specifically predicts a positive correlation between neural activity (representing the
DLs of modules) and changes in EC and therefore changes in DC ? just as seen in experiments.
We verified these predictions numerically by quantifying the correlation of changes in each centrality
measure used in the experiments with transient changes in local planning complexity as computed
in the model (Fig. 2C). Across simulated trajectories, we found that changes in DC had a strong
correlation with changes in local planning entropy (mean ρ_deg = 0.79) that was significantly higher (p < 10^-5, paired t-tests) than the correlation with the other centrality measures. We predict that even
higher correlations with neural activity could be achieved if planning DL according to the optimal
modularization, rather than DC, was used directly as a regressor in general linear models of the fMRI
data.
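The correlation analysis itself is straightforward; a sketch with placeholder arrays standing in for the per-step changes measured along simulated trajectories:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
d_dl = rng.standard_normal(200)                    # changes in local planning DL
d_degree = d_dl + 0.3 * rng.standard_normal(200)   # changes in degree centrality
d_closeness = rng.standard_normal(200)             # changes in closeness centrality
rho_deg, _ = spearmanr(d_degree, d_dl)
rho_clo, _ = spearmanr(d_closeness, d_dl)
print(rho_deg, rho_clo)  # DC should track planning DL far more closely
```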
3 Transitioning to a module with larger/smaller DL will cause, on average, a more positive/negative DL change compared to the previous module DL.

4 Task-bracketing and start/stop signals in striatal circuits
Several studies have examined sequential action selection paradigms and identified specialized task-bracketing 33,34 and 'start' and 'stop' neurons that are invariant to a wide range of motivational, kinematic, and environmental variables 36,35. Here, we show that task-bracketing and start/stop signals arise naturally from our model framework in two well-studied tasks, one involving their temporal 34 and the other their representational characteristics 36.
In the first study, as rodents learned to navigate a T-maze (Fig. 3A), neural activity in dorsolateral striatum and infralimbic cortex became increasingly crystallized into temporal patterns known as 'task-brackets' 34. For example, although neural activity was highly variable before learning, after learning the same neurons fired phasically at the start of a behavioral sequence, as the rodent turned into and out of the critical junction, and finally at the final goal position where reward was obtained. Based on the optimal modularization for the T-maze state-space (Fig. 3A inset), we examined spike trains from simulated neurons whose firing rates scaled with local planning entropy (see supplementary material). Initially (i.e. without modularization, Fig. 3C right) the firing rate did not reflect any task-bracketing, but following training (i.e. with the optimal modularization, Fig. 3C left) the activity exhibited clear task-bracketing driven by the initiation or completion of a local planning process. These results show a good qualitative match to the empirical data (Fig. 3B, from Ref. 34), showing that task-bracketing patterns of activity can be explained as the result of module start/stop signaling and planning according to an optimal modular decomposition of the environment.
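A minimal version of the simulation described above (the full details are in the paper's supplementary material; the rate scaling and bin size here are placeholder choices of our own):

```python
import numpy as np

rng = np.random.default_rng(2)
# Local planning DL (nats) of the module entered at each trial event,
# e.g. [start, turn start, turn end, goal arrival]; values are placeholders.
module_dl = np.array([1.2, 0.9, 0.2, 0.8])
rate = 2.0 + 8.0 * module_dl                     # firing rate (Hz) scales with DL
spikes = rng.poisson(rate * 0.1, size=(50, 4))   # 50 trials, 100 ms bins
print(spikes.mean(axis=0) / 0.1)                 # empirical rates per event (Hz)
```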
In the second study, rodents engaged in an operant conditioning paradigm in which a sequence of eight
presses on a left or right lever led to the delivery of high or low rewards 36 . After learning, recordings
from nigrostriatal circuits showed that some neurons encoded the initiation, and fewer appeared to
encode the termination, of these action sequences. We used our framework to compute the optimal
modularization based on an approximation to the task state-space (Fig. 3D) in which the rodent could
be in many natural behavioral states (red circles) prior to the start of the task. Our model found
that the lever action sequences were extracted into two separate modules (blue and green circles).
Given a modularization, a hierarchical entropy coding strategy uses distinct neural codewords for the
initiation and termination of each module (Section 2.6). Importantly, we found that the description
lengths of start codes was longer than that of stop codes (Fig. 3F). Thus, an efficient allocation of
neural resources predicts more neurons encoding start than stop signals, as seen in the empirical data
(Fig. 3E). Intuitively, more bits are required to encode starts than stops in this state-space due to the
relatively high level of entropic centrality of the 'rest' state (where many different behaviors may
be initiated, red circles) compared to the final lever press state (which is only accessible from the
previous lever press state and where the rodent can only choose to enter the magazine or return to 'rest'). These results show that the start and stop codes and their representational characteristics arise
naturally from an efficient representation of the optimally modularized state space.
5 Discussion
We have developed the first framework in which it is possible to derive state-space modularizations
that are directly optimized for the efficiency of decision making strategies and do not require
prior knowledge of the optimal policy before computing the modularization. Furthermore, we
have identified experimental hallmarks of the resulting modularizations, thereby unifying a range
of seemingly disparate results from behavioral and neurophysiological studies within a common,
principled framework. An interesting future direction would be to study how modularized policy
production may be realized in neural circuits. In such cases, once a representation has been established,
neural dynamics at each level of the hierarchy may be used to move along a state-space trajectory via
a sequence of attractors with neural adaptation preventing backflow 38 , or by using fundamentally
non-normal dynamics around a single attractor state 39 . The description length that lies at the heart of
the modularization we derived was based on a specific planning algorithm, random search, which
may not lead to the modularization that would be optimal for other, more powerful and realistic,
planning algorithms. Nevertheless, in principle, our approach is general in that it can take any
planning algorithm as the component that generates description lengths, including hybrid algorithms
that combine model-based and model-free techniques that likely underlie animal and human decision
making 40 .
References
1. Lashley K. In: Jeffress LA, editor. Cerebral Mechanisms in Behavior, New York: Wiley, pp 112-147. 1951.
2. Simon H, Newell A. Human Problem Solving. Longman Higher Education, 1971.
3. Sutton R, Barto A. Reinforcement Learning: An Introduction. MIT Press, 1998.
4. Stachenfeld K et al. Advances in Neural Information Processing Systems, 2014.
5. Moore AW et al. IJCAI International Joint Conference on Artificial Intelligence 2:1318-1321, 1999.
6. Lengyel M, Dayan P. Advances in Neural Information Processing Systems, 2007.
7. Dayan P, Hinton G. Advances in Neural Information Processing Systems, 1992.
8. Parr R, Russell S. Advances in Neural Information Processing Systems, 1997.
9. Sutton R et al. Artificial Intelligence 112:181-211, 1999.
10. Hauskrecht M et al. In: Uncertainty in Artificial Intelligence. 1998.
11. Rothkopf CA, Ballard DH. Frontiers in Psychology 1:1-13, 2010.
12. Huys QJM et al. Proceedings of the National Academy of Sciences 112:3098-3103, 2015.
13. Gershman SJ et al. Journal of Neuroscience 29:13524-31, 2009.
14. Schapiro AC et al. Nature Neuroscience 16:486-492, 2013.
15. Foster D, Dayan P. Machine Learning pp 325-346, 2002.
16. Solway A et al. PLoS Computational Biology 10:e1003779, 2014.
17. Littman ML et al. Journal of Artificial Intelligence Research 9:1-36, 1998.
18. Boutilier C et al. Journal of Artificial Intelligence Research 11:1-94, 1999.
19. Singh SP et al. Advances in Neural Information Processing Systems, 1995.
20. Kim KE, Dean T. Artificial Intelligence 147:225-251, 2003.
21. Simsek O, Barto AG. Advances in Neural Information Processing Systems, 2008.
22. Kemeny JG, Snell JL. Finite Markov Chains. Springer-Verlag, 1983.
23. Balasubramanian V. Neural Computation 9:349-368, 1996.
24. Rissanen J. Information and Complexity in Statistical Modeling. Springer, 2007.
25. Kafsi M et al. IEEE Transactions on Information Theory 59:5577-5583, 2013.
26. Todd M et al. Advances in Neural Information Processing Systems, 2008.
27. Otto AR et al. Psychological Science 24:751-61, 2013.
28. MacKay D. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
29. Bonasia K et al. Hippocampus 26:9-12, 2016.
30. Javadi AH et al. Nature Communications in press, 2016.
31. Rosvall M, Bergstrom CT. Proceedings of the National Academy of Sciences 105:1118-1123, 2008.
32. Ganguli D, Simoncelli E. Neural Computation 26:2103-2134, 2014.
33. Barnes TD et al. Nature 437:1158-61, 2005.
34. Smith KS, Graybiel AM. Neuron 79:361-374, 2013.
35. Fujii N, Graybiel AM. Science 301:1246-1249, 2003.
36. Jin X, Costa RM. Nature 466:457-462, 2010.
37. Stalnaker TA et al. Frontiers in Integrative Neuroscience 4:12, 2010.
38. Russo E et al. New Journal of Physics 10, 2008.
39. Hennequin G et al. Neuron 82:1394-406, 2014.
40. Daw ND et al. Nature Neuroscience 8:1704-11, 2005.
5,882 | 6,321 | RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism
Edward Choi, Mohammad Taha Bahadori, Joshua A. Kulas, Jimeng Sun (Georgia Institute of Technology); Andy Schuetz, Walter F. Stewart (Sutter Health)
{mp2893,bahadori,jkulas3}@gatech.edu, {schueta1,stewarwf}@sutterhealth.org, [email protected]
Abstract
Accuracy and interpretability are two dominant features of successful predictive
models. Typically, a choice must be made in favor of complex black box models
such as recurrent neural networks (RNN) for accuracy versus less accurate but
more interpretable traditional models such as logistic regression. This tradeoff
poses challenges in medicine where both accuracy and interpretability are important. We addressed this challenge by developing the REverse Time AttentIoN
model (RETAIN) for application to Electronic Health Records (EHR) data. RETAIN
achieves high accuracy while remaining clinically interpretable and is based on
a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g. key diagnoses). RETAIN mimics
physician practice by attending the EHR data in a reverse time order so that recent
clinical visits are likely to receive higher attention. RETAIN was tested on a large
health system EHR dataset with 14 million visits completed by 263K patients over
an 8 year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as RNN, and ease of interpretability
comparable to traditional models.
1 Introduction
The broad adoption of Electronic Health Record (EHR) systems has opened the possibility of
applying clinical predictive models to improve the quality of clinical care. Several systematic reviews
have underlined care quality improvements achieved through predictive analysis [7, 25, 5, 20]. EHR data
can be represented as temporal sequences of high-dimensional clinical variables (e.g., diagnoses,
medications and procedures), where the sequence ensemble represents the documented content of
medical visits from a single patient. Traditional machine learning tools summarize this ensemble into
aggregate features, ignoring the temporal and sequence relationships among the feature elements.
The opportunity to improve both predictive accuracy and interpretability is likely to derive from
effectively modeling temporality and high-dimensionality of these event sequences.
Accuracy and interpretability are two dominant features of successful predictive models. There is a
common belief that one has to trade accuracy for interpretability using one of three types of traditional
models [6]: 1) identifying a set of rules (e.g. via decision trees [27]), 2) case-based reasoning by
finding similar patients (e.g. via k-nearest neighbors [18] and distance metric learning [36]), and 3)
identifying a list of risk factors (e.g. via LASSO coefficients [15]). While interpretable, all of these
models rely on aggregated features, ignoring the temporal relation among features inherent to EHR
data. As a consequence, model accuracy is sub-optimal. Latent-variable time-series models, such as
[34, 35], account for temporality, but often have limited interpretation due to abstract state variables.
Recently, recurrent neural networks (RNN) have been successfully applied in modeling sequential
EHR data to predict diagnoses [30] and disease progression [11, 14]. But, the gain in accuracy from
Figure 1: Common attention models vs. RETAIN, using folded diagrams of RNNs. (a) Standard attention mechanism: the recurrence on the hidden state vector v_i hinders interpretation of the model. (b) Attention mechanism in RETAIN: the recurrence is on the attention generation components (h_i or g_i) while the hidden state v_i is generated by a simpler, more interpretable output.
use of RNNs is at the cost of model output that is notoriously difficult to interpret. While there have
been several attempts at directly interpreting RNNs [19, 26, 8], these methods are not sufficiently
developed for application in clinical care.
We have addressed this limitation using a modeling strategy known as RETAIN, a two-level neural
attention model for sequential data that provides detailed interpretation of the prediction results while
retaining the prediction accuracy comparable to RNN. To this end, RETAIN relies on an attention
mechanism modeled to represent the behavior of physicians during an encounter. A distinguishing
feature of RETAIN (see Figure 1) is to leverage sequence information using an attention generation
mechanism, while learning an interpretable representation. And emulating physician behaviors,
RETAIN examines a patient?s past visits in reverse time order, facilitating a more stable attention
generation. As a result, RETAIN identifies the most meaningful visits and quantifies visit specific
features that contribute to the prediction.
RETAIN was tested on a large health system EHR dataset with 14 million visits completed by 263K
patients over an 8 year period. We compared predictive accuracy of RETAIN to traditional machine
learning methods and to RNN variants using a case-control dataset to predict a future diagnosis of
heart failure. The comparative analysis demonstrates that RETAIN achieves comparable performance
to RNN in both accuracy and speed and significantly outperforms traditional models. Moreover,
using a concrete case study and visualization method, we demonstrate how RETAIN offers an intuitive
interpretation.
2 Methodology
We first describe the structure of sequential EHR data and our notation, then follow with a general
framework for predictive analysis in healthcare using EHR, followed by details of the RETAIN method.
EHR Structure and our Notation. The EHR data of each patient can be represented as a time-labeled sequence of multivariate observations. Assuming we use r different variables, the n-th patient of N total patients can be represented by a sequence of T^(n) tuples (t_i^(n), x_i^(n)) ∈ ℝ × ℝ^r, i = 1, . . . , T^(n). The timestamp t_i^(n) denotes the time of the i-th visit of the n-th patient and T^(n) the number of visits of the n-th patient. To minimize clutter, we describe the algorithms for a single patient and have dropped the superscript (n) whenever it is unambiguous. The goal of predictive modeling is to predict the label at each time step y_i ∈ {0, 1}^s or at the end of the sequence y ∈ {0, 1}^s. The number of labels s can be more than one.
For example, in disease progression modeling (DPM) [11], each visit of a patient's visit sequence is represented by a set of a varying number of medical codes {c_1, c_2, . . . , c_n}. c_j is the j-th code from the vocabulary C. Therefore, in DPM, the number of variables r = |C| and the input x_i ∈ {0, 1}^|C| is a binary vector where the value one in the j-th coordinate indicates that c_j was documented in the i-th visit. Given a sequence of visits x_1, . . . , x_T, the goal of DPM is, for each time step i, to predict the codes occurring at the next visit x_2, . . . , x_{T+1}, with the number of labels s = |C|.
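For concreteness, a sketch of this multi-hot input construction; the vocabulary and codes are invented examples, not the paper's actual groupers:

```python
import numpy as np

vocab = {"401.9": 0, "250.00": 1, "Insulin": 2, "EKG": 3}  # toy code map

def encode_visit(codes, vocab):
    """Multi-hot encoding of one visit: x_i in {0,1}^|C|."""
    x = np.zeros(len(vocab))
    for c in codes:
        x[vocab[c]] = 1.0
    return x

visit = encode_visit({"401.9", "Insulin"}, vocab)  # one visit's code set
```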
In the case of learning to diagnose (L2D) [30], the input vector x_i consists of continuous clinical measurements. If there are r different measurements, then x_i ∈ ℝ^r. The goal of L2D is, given an input sequence x_1, . . . , x_T, to predict the occurrence of a specific disease (s = 1) or multiple diseases (s > 1). Without loss of generality, we will describe the algorithm for DPM, as L2D can be seen as a special case of DPM where we make a single prediction at the end of the visit sequence.
In the rest of this section, we will use the abstract symbol RNN to denote any recurrent neural
network variants that can cope with the vanishing gradient problem [3], such as LSTM [23], GRU
[9], and IRNN [29], with any depth (number of hidden layers).
2.1 Preliminaries on Neural Attention Models
Attention based neural network models are being successfully applied to image processing [1, 32, 21,
37], natural language processing [2, 22, 33] and speech recognition [12]. The utility of the attention
mechanism can be seen in the language translation task [2] where it is inefficient to represent an
entire sentence with one fixed-size vector, because neural translation machines find it difficult to
translate the given sentence represented by a single vector.
Intuitively, the attention mechanism for language translation works as follows: given a sentence of length S in the original language, we generate h_1, . . . , h_S to represent the words in the sentence. To find the j-th word in the target language, we generate attentions α_ij for i = 1, . . . , S for each word in the original sentence. Then, we compute the context vector c_j = Σ_i α_ij h_i and use it to predict the j-th word in the target language. In general, the attention mechanism allows the model to focus on a specific word (or words) in the given sentence when generating each word in the target language.
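A minimal sketch of this mechanism with random annotation vectors; the scoring function that produces the e's is left abstract here:

```python
import numpy as np

def softmax(e):
    e = e - e.max()          # numerical stability
    w = np.exp(e)
    return w / w.sum()

H = np.random.randn(5, 8)    # h_1..h_S: S=5 word annotations, dim 8
e = np.random.randn(5)       # unnormalized scores for one target word j
alpha = softmax(e)           # attention weights alpha_{ij}
c_j = alpha @ H              # context vector c_j = sum_i alpha_{ij} h_i
```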
We rely on a conceptually similar temporal attention mechanism to generate interpretable prediction
models using EHR data. Our model framework is motivated by and mimics how doctors attend to a
patient's needs and explore the patient record, where there is a focus on specific clinical information
(e.g., key risk factors) working from the present to the past.
2.2 Reverse Time Attention Model RETAIN
Figure 2 shows the high-level overview of our model, where a central feature is to delegate a
considerable portion of the prediction responsibility to the process for generating attention weights.
This is intended to address, in part, the difficulty with interpreting RNNs where the recurrent weights
feed past information to the hidden layer. Therefore, to consider both the visit-level and the variable-level (individual coordinates of x_i) influence, we use a linear embedding of the input vector x_i. That is, we define

v_i = W_emb x_i,        (Step 1)

where v_i ∈ ℝ^m denotes the embedding of the input vector x_i ∈ ℝ^r, m the size of the embedding dimension, and W_emb ∈ ℝ^{m×r} the embedding matrix to learn. We can alternatively use more sophisticated yet interpretable representations such as those derived from a multilayer perceptron (MLP) [13, 28].
MLP has been used for representation learning in EHR data [10].
We use two sets of weights, one for the visit-level attention and the other for variable-level attention, respectively. The scalars α_1, . . . , α_i are the visit-level attention weights that govern the influence of each visit embedding v_1, . . . , v_i. The vectors β_1, . . . , β_i are the variable-level attention weights that focus on each coordinate of the visit embeddings v_{1,1}, v_{1,2}, . . . , v_{1,m}, . . . , v_{i,1}, v_{i,2}, . . . , v_{i,m}.
We use two RNNs, RNN_α and RNN_β, to separately generate the α's and β's as follows:

g_i, g_{i-1}, . . . , g_1 = RNN_α(v_i, v_{i-1}, . . . , v_1)
e_j = w_α^⊤ g_j + b_α,  for j = 1, . . . , i
α_1, α_2, . . . , α_i = Softmax(e_1, e_2, . . . , e_i)        (Step 2)

h_i, h_{i-1}, . . . , h_1 = RNN_β(v_i, v_{i-1}, . . . , v_1)
β_j = tanh(W_β h_j + b_β),  for j = 1, . . . , i              (Step 3)
Figure 2: Unfolded view of RETAIN's architecture: Given input sequence x_1, . . . , x_i, we predict the label y_i. Step 1: Embedding. Step 2: generating α values using RNN_α. Step 3: generating β values using RNN_β. Step 4: generating the context vector using attention and representation vectors. Step 5: making the prediction. Note that in Steps 2 and 3 we run the RNNs in reversed time.
where g_i ∈ ℝ^p is the hidden layer of RNN_α at time step i, h_i ∈ ℝ^q the hidden layer of RNN_β at time step i, and w_α ∈ ℝ^p, b_α ∈ ℝ, W_β ∈ ℝ^{m×q} and b_β ∈ ℝ^m are the parameters to learn. The hyperparameters p and q determine the hidden layer sizes of RNN_α and RNN_β, respectively. Note that for the prediction at each timestamp, we generate a new set of attention vectors α and β. For simplicity of notation, we do not include an index for predicting at different time steps. In Step 2, we can use Sparsemax [31] instead of Softmax for sparser attention weights.
As noted, RETAIN generates the attention vectors by running the RNNs backward in time; i.e., RNN_α and RNN_β both take the visit embeddings in reverse order v_i, v_{i-1}, . . . , v_1. Running the RNNs in reversed time order also offers computational advantages, since it allows us to generate e's and β's that dynamically change their values when making predictions at different time steps i = 1, 2, . . . , T. This ensures that the attention vectors are modified at each time step, increasing the computational stability of the attention generation process.1
Using the generated attentions, we obtain the context vector c_i for a patient up to the i-th visit as follows:

c_i = Σ_{j=1}^{i} α_j β_j ⊙ v_j,        (Step 4)

where ⊙ denotes element-wise multiplication. We use the context vector c_i ∈ ℝ^m to predict the true label y_i ∈ {0, 1}^s as follows:

ŷ_i = Softmax(W c_i + b),        (Step 5)
where W ∈ ℝ^{s×m} and b ∈ ℝ^s are parameters to learn. We use the cross-entropy to calculate the classification loss as follows:

L(x_1, . . . , x_T) = −(1/N) Σ_{n=1}^{N} (1/T^(n)) Σ_{i=1}^{T^(n)} ( y_i^⊤ log(ŷ_i) + (1 − y_i)^⊤ log(1 − ŷ_i) )        (1)

where we sum the cross-entropy errors from all dimensions of ŷ_i. In the case of real-valued output y_i ∈ ℝ^s, we can change the cross-entropy in Eq. (1) to, for example, mean squared error.
Overall, our attention mechanism can be viewed as the inverted architecture of the standard attention mechanism for NLP [2], where the words are encoded by an RNN and the attention weights are generated by an MLP. In contrast, our method uses an MLP to embed the visit information to preserve interpretability and uses RNNs to generate two sets of attention weights, recovering the sequential information as well as mimicking the behavior of physicians. Note that we did not use the timestamp of each visit in our formulation. Using timestamps, however, provides a small improvement in the prediction performance. We propose a method to use timestamps in Appendix A.

1 For example, feeding visit embeddings in the original order to RNN_α and RNN_β will generate the same e_1 and β_1 for every time step i = 1, 2, . . . , T. Moreover, in many cases, a patient's recent visit records deserve more attention than the old records. Then we need to have e_{j+1} > e_j, which makes the process computationally unstable for long sequences.
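Putting Steps 1-5 together, a hedged NumPy sketch of RETAIN's forward pass for one patient; for brevity a plain tanh RNN stands in for the GRUs used in the paper, parameter initialization is omitted, and all shapes follow the notation above:

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def rnn(V, Wx, Wh):
    """Plain tanh RNN over a sequence of vectors (stand-in for a GRU)."""
    h = np.zeros(Wh.shape[0])
    out = []
    for v in V:
        h = np.tanh(Wx @ v + Wh @ h)
        out.append(h)
    return np.stack(out)

def retain_forward(X, p):
    """X: (i, r) visit vectors x_1..x_i; p: dict of model parameters."""
    V = X @ p["Wemb"].T                           # Step 1: v_j = Wemb x_j
    Vrev = V[::-1]                                # reverse time order
    G = rnn(Vrev, p["Wx_a"], p["Wh_a"])[::-1]     # Step 2: RNN_alpha
    alpha = softmax(G @ p["w_a"] + p["b_a"])      # visit-level attention
    Hs = rnn(Vrev, p["Wx_b"], p["Wh_b"])[::-1]    # Step 3: RNN_beta
    beta = np.tanh(Hs @ p["Wb"].T + p["b_b"])     # variable-level attention
    c = (alpha[:, None] * beta * V).sum(axis=0)   # Step 4: context vector
    return softmax(p["W"] @ c + p["b"])           # Step 5: prediction
```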
3 Interpreting RETAIN
The visits that contribute most to the prediction are found using the largest α_i, which is straightforward. However, finding influential variables is slightly more involved, as a visit is represented by an ensemble of medical variables, each of which can vary in its predictive contribution. The contribution of each variable is determined by v, α and β, and interpretation of α alone informs which visit is influential in the prediction but not why.

We propose a method to interpret the end-to-end behavior of RETAIN. By keeping the α and β values fixed as the attention of doctors, we analyze changes in the probability of each label y_{i,1}, . . . , y_{i,s} in relation to changes in the original input x_{1,1}, . . . , x_{1,r}, . . . , x_{i,1}, . . . , x_{i,r}. The x_{j,k} that yields the largest change in y_{i,d} will be the input variable with the highest contribution. More formally, given the sequence x_1, . . . , x_i, we are trying to predict the probability of the output vector y_i ∈ {0, 1}^s, which can be expressed as follows:

p(y_i | x_1, . . . , x_i) = p(y_i | c_i) = Softmax(W c_i + b)        (2)
where c_i ∈ ℝ^m denotes the context vector. According to Step 4, c_i is the sum of the visit embeddings v_1, . . . , v_i weighted by the attentions α's and β's. Therefore Eq. (2) can be rewritten as follows:

p(y_i | x_1, . . . , x_i) = Softmax( W ( Σ_{j=1}^{i} α_j β_j ⊙ v_j ) + b )        (3)

Using the fact that the visit embedding v_i is the sum of the columns of W_emb weighted by each element of x_i, Eq. (3) can be rewritten as follows:

p(y_i | x_1, . . . , x_i) = Softmax( W ( Σ_{j=1}^{i} α_j β_j ⊙ Σ_{k=1}^{r} x_{j,k} W_emb[:, k] ) + b )
                          = Softmax( Σ_{j=1}^{i} Σ_{k=1}^{r} x_{j,k} α_j W ( β_j ⊙ W_emb[:, k] ) + b )        (4)
where x_{j,k} is the k-th element of the input vector x_j. Eq. (4) can be completely deconstructed to the variables at each input x_1, . . . , x_i, which allows for calculating the contribution ω of the k-th variable of the input x_j at time step j ≤ i, for predicting y_i:

ω(y_i, x_{j,k}) = α_j W(β_j ⊙ W_emb[:, k]) x_{j,k},        (5)

where α_j W(β_j ⊙ W_emb[:, k]) is the contribution coefficient and x_{j,k} is the input value. The index i of y_i is omitted in α_j and β_j; as described in Section 2.2, the α's and β's are generated at time step i in the visit sequence x_1, . . . , x_T, so the index i is always assumed for them. Additionally, Eq. (5) shows that when we are using a binary input value, the coefficient itself is the contribution. However, when we are using a non-binary input value, we need to multiply the coefficient and the input value x_{j,k} to correctly calculate the contribution.
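A direct translation of Eq. (5), assuming alpha, beta and the weight matrices come from a trained model; the function name is our own:

```python
import numpy as np

def contributions(alpha, beta, Wemb, W, X):
    """omega[d, j, k] = alpha_j * [W (beta_j * Wemb[:, k])]_d * x_{j,k},
    i.e. Eq. (5) for every label d, visit j and variable k."""
    i, r = X.shape
    omega = np.zeros((W.shape[0], i, r))
    for j in range(i):
        # contribution coefficients for all r variables at visit j
        coeff = W @ (beta[j][:, None] * Wemb)   # shape (s, r)
        omega[:, j, :] = alpha[j] * coeff * X[j]
    return omega
```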
4 Experiments
We compared performance of RETAIN to RNNs and traditional machine learning methods. Given
space constraints, we only report the results on the learning to diagnose (L2D) task and summarize the
disease progression modeling (DPM) in Appendix C. The RETAIN source code is publicly available
at https://github.com/mp2893/retain.
4.1 Experimental setting
Source of data: The dataset consists of electronic health records from Sutter Health. The patients
are adults aged 50 to 80, chosen for a heart failure prediction model study. From the encounter
records, medication orders, procedure orders and problem lists, we extracted visit records consisting
of diagnosis, medication and procedure codes. To reduce the dimensionality while preserving the
clinical information, we used existing medical groupers to aggregate the codes into input variables.
The details of the medical groupers are given in the Appendix B. A profile of the dataset is summarized
in Table 1.
Table 1: Statistics of EHR dataset. (D:Diagnosis, R:Medication, P:Procedure)

# of patients: 263,683                                Avg. # of codes in a visit: 3.03
# of visits: 14,366,030                               Max # of codes in a visit: 62
Avg. # of visits per patient: 54.48                   Avg. # of Dx codes in a visit: 1.83
# of medical code groups: 615 (D:283, R:94, P:238)    Max # of Dx in a visit: 42
Implementation details: We implemented RETAIN with Theano 0.8 [4]. For training the model, we
used Adadelta [38] with a mini-batch of 100 patients. The training was done on a machine equipped with an Intel Xeon E5-2630, 256GB RAM, two Nvidia Tesla K80s and CUDA 7.5.
Baselines: For comparison, we implemented the following models.
- Logistic regression (LR): We compute the counts of medical codes for each patient based on all her visits as input variables and normalize the vector to zero mean and unit variance. We use the resulting vector to train the logistic regression.
- MLP: We use the same feature construction as LR, but put a hidden layer of size 256 between the input and output.
- RNN: An RNN with two hidden layers of size 256, implemented with GRUs. Input sequences x_1, . . . , x_i are used. Logistic regression is applied to the top hidden layer. We use two layers of RNN to match the model complexity of RETAIN.
- RNN+α_M: A one-layer, single-directional RNN (hidden layer size 256) along time generates the input embeddings v_1, . . . , v_i. We use an MLP with a single hidden layer of size 256 to generate the visit-level attentions α_1, . . . , α_i, taking the input embeddings v_1, . . . , v_i as the input to the MLP. This baseline corresponds to Figure 1a.
- RNN+α_R: This is similar to RNN+α_M but uses a reverse-order RNN (hidden layer size 256) to generate the visit-level attentions α_1, . . . , α_i. We use this baseline to confirm the effectiveness of generating the attentions in reverse time order.
The comparative visualization of the baselines is provided in Appendix D. We use the same implementation and training method for the baselines as described above. The details on the hyperparameters, regularization and drop-out strategies for the baselines are described in Appendix B.

Evaluation measures: Model accuracy was measured by:
- Negative log-likelihood, which measures the model loss on the test set. The loss can be calculated by Eq. (1).
- Area Under the ROC Curve (AUC), comparing ŷ_i with the true label y_i. AUC is more robust to imbalanced positive/negative prediction labels, making it appropriate for evaluating classification accuracy in the heart failure prediction task.

We also report the bootstrap (10,000 runs) estimate of the standard deviation of the evaluation measures. A minimal computation sketch of both measures is shown below.
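Both measures are standard; a sketch with placeholder labels and scores of how they would be computed:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y = np.array([0, 1, 0, 1, 1])                 # placeholder test labels
y_hat = np.array([0.1, 0.8, 0.3, 0.6, 0.9])   # placeholder model scores
nll = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
auc = roc_auc_score(y, y_hat)
print(nll, auc)
```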
4.2 Heart Failure Prediction
Objective: Given a visit sequence x1 , . . . , xT , we predicted if a primary care patient will be
diagnosed with heart failure (HF). This is a special case of DPM with a single disease outcome at
the end of the sequence. Since this is a binary prediction task, we use the logistic sigmoid function
instead of the Softmax in Step 5.
Cohort construction: From the source dataset, 3,884 cases are selected and approximately 10
controls are selected for each case (28,903 controls). The case/control selection criteria are fully
described in the supplementary section. Cases have index dates to denote the date they are diagnosed
with HF. Controls have the same index dates as their corresponding cases. We extract diagnosis codes,
medication codes and procedure codes in the 18-months window before the index date.
Training details: The patient cohort was divided into the training, validation and test sets in a
0.75:0.1:0.15 ratio. The validation set was used to determine the values of the hyper-parameters. See
Appendix B for details of hyper-parameter tuning.
Table 2: Heart failure prediction performance of RETAIN and the baselines

Model      Test Neg Log Likelihood   AUC               Train Time / epoch   Test Time
LR         0.3269 ± 0.0105           0.7900 ± 0.0111   0.15s                0.11s
MLP        0.2959 ± 0.0083           0.8256 ± 0.0096   0.25s                0.11s
RNN        0.2577 ± 0.0082           0.8706 ± 0.0080   10.3s                0.57s
RNN+α_M    0.2691 ± 0.0082           0.8624 ± 0.0079   6.7s                 0.48s
RNN+α_R    0.2605 ± 0.0088           0.8717 ± 0.0080   10.4s                0.62s
RETAIN     0.2562 ± 0.0083           0.8705 ± 0.0081   10.8s                0.63s
Results: Logistic regression and MLP underperformed compared to the four temporal learning
algorithms (Table 2). RETAIN is comparable to the other RNN variants in terms of prediction
performance while offering the interpretation benefit.
Note that the RNN+α_R model is a degenerate version of RETAIN with only scalar attention, which is still a competitive model as shown in Table 2. This confirms the efficiency of generating attention weights using the RNN. However, the RNN+α_R model only provides scalar visit-level attention, which is not sufficient for healthcare applications. Patients often receive several medical codes at a single visit, and it is important to distinguish their relative importance to the target. We show such a case study in Section 4.3.
Table 2 also shows the scalability of RETAIN, as its training time (the number of seconds to train the model over the entire training set once) is comparable to RNN. The test time is the number of seconds to generate the prediction output for the entire test set. We use mini-batches of 100 patients when assessing both training and test times. RNN takes longer than RNN+α_M because of its two-layer structure, whereas RNN+α_M uses a single-layer RNN. The models that use two RNNs (RNN, RNN+α_R, RETAIN)2 take similar time to train for one epoch. However, each model required a different number of epochs to converge: RNN typically takes approximately 10 epochs, RNN+α_M and RNN+α_R 15 epochs, and RETAIN 30 epochs. Lastly, training the attention models (RNN+α_M, RNN+α_R and RETAIN) for DPM would take considerably longer than for L2D, because DPM generates context vectors at each time step. RNN, on the other hand, does not require additional computation other than embedding the visit to its hidden layer to predict target labels at each time step. Therefore, in DPM, the training time of the attention models will increase linearly with the length of the input sequence.
4.3 Model Interpretation for Heart Failure Prediction
We evaluated the interpretability of RETAIN in the HF prediction task by choosing an HF patient from
the test set and calculating the contribution of the variables (medical codes in this case) to diagnostic
prediction. The patient suffered from skin problems, skin disorder (SD), benign neoplasm (BN),
excision of skin lesion (ESL), for some time before showing symptoms of HF, cardiac dysrhythmia
(CD), heart valve disease (HVD) and coronary atherosclerosis (CA), and then a diagnosis of HF
(Figure 3). We can see that skin-related codes from the earlier visits made little contribution to HF
prediction as expected. RETAIN properly puts much attention to the HF-related codes that occurred in
recent visits.
To confirm RETAIN's ability to exploit the sequence information of the EHR data, we reverse the visit sequence of Figure 3a and feed it to RETAIN. Figure 3b shows the contribution of the medical codes of the reversed visit record. HF-related codes in the past are still making positive contributions, but not as much as they did in Figure 3a. Figure 3b also emphasizes RETAIN's superiority to interpretable but stationary models such as logistic regression. Stationary models often aggregate past information and remove the temporality from the input data, which can mistakenly lead to the same risk prediction for Figures 3a and 3b. RETAIN, however, correctly digests the sequence information and calculates an HF risk score of 9.0%, which is significantly lower than that of Figure 3a.
Figure 3c shows how the contributions of codes change when selected medication data are used in
the model. We added two medications from day 219: antiarrhythmics (AA) and anticoagulants (AC),
both of which are used to treat cardiac dysrhythmia (CD). The two medications make negative contributions, especially towards the end of the record. The medications decreased the positive
contributions of heart valve disease and cardiac dysrhythmia in the last visit. Indeed, the HF risk
2 The RNN baseline uses two layers of RNN; RNN+α_R uses one for visit embedding and one for generating α; RETAIN uses one each for generating α and β.
[Figure 3, panels (a)-(c): per-visit contribution plots over time. (a) HF risk: 0.2474. (b) HF risk: 0.0905. (c) HF risk: 0.2165. Code legend: SD: skin disorder; ESL: excision of skin lesion; BN: benign neoplasm; CD: cardiac dysrhythmia; CA: coronary atherosclerosis; HVD: heart valve disorder; AA: antiarrhythmic medication; AC: anticoagulant medication.]
Figure 3: (a) Temporal visualization of a patient's visit records where the contribution of variables for
diagnosis of heart failure (HF) is summarized along the x-axis (i.e. time) with the y-axis indicating
the magnitude of visit and code specific contributions to HF diagnosis. (b) We reverse the order of
the visit sequence to see if RETAIN can properly take into account the modified sequence information.
(c) Medication codes are added to the visit record to see how it changes the behavior of RETAIN.
prediction (0.2165) of Figure 3c is lower than that of Figure 3a (0.2474). This suggests that taking
proper medications can help the patient in reducing their HF risk.
5 Conclusion
Our approach to modeling event sequences as predictors of HF diagnosis suggests that complex models can offer both superior predictive accuracy and more precise interpretability. Given the power of RNNs for analyzing sequential data, we proposed RETAIN, which preserves RNN's predictive power while allowing a higher degree of interpretation. The key idea of RETAIN is to improve the prediction accuracy through a sophisticated attention generation process, while keeping the representation learning part simple for interpretation, making the entire algorithm accurate and interpretable. RETAIN trains two RNNs in reverse time order to efficiently generate the appropriate attention variables. For future work, we plan to develop an interactive visualization system for RETAIN and to evaluate RETAIN in other healthcare applications.
References
[1] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. In ICLR, 2015.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
In ICLR, 2015.
[3] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult.
Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.
[4] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley,
and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of SciPy, 2010.
[5] A. D. Black, J. Car, C. Pagliari, C. Anandan, K. Cresswell, T. Bokun, B. McKinstry, R. Procter, A. Majeed,
and A. Sheikh. The impact of ehealth on the quality and safety of health care: a systematic overview. PLoS
Med, 8(1):e1000387, 2011.
[6] R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad. Intelligible models for healthcare:
Predicting pneumonia risk and hospital 30-day readmission. In KDD, 2015.
[7] B. Chaudhry, J. Wang, S. Wu, M. Maglione, W. Mojica, E. Roth, S. C. Morton, and P. G. Shekelle.
Systematic review: impact of health information technology on quality, efficiency, and costs of medical
care. Annals of internal medicine, 144(10):742?752, 2006.
[8] Z. Che, S. Purushotham, R. Khemani, and Y. Liu. Distilling knowledge from deep networks with
applications to healthcare domain. arXiv preprint arXiv:1512.03542, 2015.
8
[9] K. Cho, B. Van Merri?nboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning
phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, 2014.
[10] E. Choi, M. T. Bahadori, E. Searles, C. Coffey, and J. Sun. Multi-layer representation learning for medical
concepts. In KDD, 2016.
[11] E. Choi, M. T. Bahadori, and J. Sun. Doctor ai: Predicting clinical events via recurrent neural networks.
arXiv preprint arXiv:1511.05942, 2015.
[12] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech
recognition. In NIPS, pages 577?585, 2015.
[13] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network.
University of Montreal, 2009.
[14] C. Esteban, O. Staeck, Y. Yang, and V. Tresp. Predicting clinical events by combining static and dynamic
information using recurrent neural networks. arXiv preprint arXiv:1602.02685, 2016.
[15] A. S. Fleisher, B. B. Sowell, C. Taylor, A. C. Gamst, R. C. Petersen, L. J. Thal, and f. t. A. D. C. Study.
Clinical predictors of progression to Alzheimer disease in amnestic mild cognitive impairment. Neurology,
68(19):1588?1595, May 2007.
[16] A. for Healthcare Research and Quality. Clinical classifications software (ccs) for icd-9-cm. https:
//www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed: 2016-04-01.
[17] A. for Healthcare Research and Quality. Clinical classifications software for services and procedures.
https://www.hcup-us.ahrq.gov/toolssoftware/ccs_svcsproc/ccssvcproc.jsp. Accessed:
2016-04-01.
[18] B. Gallego, S. R. Walter, R. O. Day, A. G. Dunn, V. Sivaraman, N. Shah, C. A. Longhurst, and E. Coiera.
Bringing cohort studies to the bedside: framework for a ?green button? to support clinical decision-making.
Journal of Comparative Effectiveness Research, pages 1?7, May 2015.
[19] J. Ghosh and V. Karamcheti. Sequence learning with recurrent networks: analysis of internal representations.
In Aerospace Sensing, pages 449?460. International Society for Optics and Photonics, 1992.
[20] C. L. Goldzweig, A. Towfigh, M. Maglione, and P. G. Shekelle. Costs and benefits of health information
technology: new trends from the literature. Health affairs, 28(2):w282?w293, 2009.
[21] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image
generation. arXiv preprint arXiv:1502.04623, 2015.
[22] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching
machines to read and comprehend. In NIPS, pages 1684?1692, 2015.
[23] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735?1780, 1997.
[24] W. K. C. D. Information. Medi-span electronic drug file (med-file) v2. http://www.wolterskluwercdi.
com/drug-data/medi-span-electronic-drug-file/. Accessed: 2016-04-01.
[25] A. K. Jha, C. M. DesRoches, E. G. Campbell, K. Donelan, S. R. Rao, T. G. Ferris, A. Shields, S. Rosenbaum,
and D. Blumenthal. Use of electronic health records in us hospitals. N Engl J Med, 2009.
[26] A. Karpathy, J. Johnson, and F.-F. Li. Visualizing and understanding recurrent networks. arXiv preprint
arXiv:1506.02078, 2015.
[27] A. N. Kho, M. G. Hayes, L. Rasmussen-Torvik, J. A. Pacheco, W. K. Thompson, L. L. Armstrong, J. C.
Denny, P. L. Peissig, A. W. Miller, W.-Q. Wei, S. J. Bielinski, C. G. Chute, C. L. Leibson, G. P. Jarvik,
D. R. Crosslin, C. S. Carlson, K. M. Newton, W. A. Wolf, R. L. Chisholm, and W. L. Lowe. Use of
diverse electronic medical record systems to identify genetic risk for type 2 diabetes within a genome-wide
association study. JAMIA, 19(2):212?218, Apr. 2012.
[28] Q. V. Le. Building high-level features using large scale unsupervised learning. In ICASSP, 2013.
[29] Q. V. Le, N. Jaitly, and G. E. Hinton. A simple way to initialize recurrent networks of rectified linear units.
arXiv preprint arXiv:1504.00941, 2015.
[30] Z. C. Lipton, D. C. Kale, C. Elkan, and R. Wetzell. Learning to Diagnose with LSTM Recurrent Neural
Networks. In ICLR, 2016.
[31] A. F. Martins and R. F. Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label
classification. In ICML, 2016.
[32] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In NIPS, 2014.
[33] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization.
In EMNLP, 2015.
[34] S. Saria, D. Koller, and A. Penn. Learning individual and population level traits from clinical temporal
data. In NIPS, Predictive Models in Personalized Medicine workshop, 2010.
[35] P. Schulam and S. Saria. A probabilistic graphical model for individualizing prognosis in chronic, complex
diseases. In AMIA, volume 2015, page 143, 2015.
[36] J. Sun, F. Wang, J. Hu, and S. Edabollahi. Supervised patient similarity measure of heterogeneous patient
records. ACM SIGKDD Explorations Newsletter, 14(1):16?24, 2012.
[37] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell:
Neural image caption generation with visual attention. In ICML, 2015.
[38] M. D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
9
| 6321 |@word h:1 mild:1 version:1 hu:1 confirms:1 r:1 bn:8 liu:1 series:1 score:1 offering:1 genetic:1 past:6 outperforms:1 existing:1 medi:2 com:2 comparing:1 yet:1 dx:2 must:1 gpu:1 timestamps:3 benign:2 kdd:2 remove:1 drop:1 interpretable:13 v:1 alone:1 stationary:2 selected:3 affair:1 sutter:2 vanishing:1 short:1 record:15 lr:3 provides:3 pascanu:1 contribute:2 math:1 org:1 simpler:1 accessed:3 wierstra:1 along:2 c2:1 kho:1 consists:2 indeed:1 expected:1 behavior:5 sparsemax:2 kiros:1 multi:2 salakhutdinov:1 detects:1 unfolded:1 little:1 valve:3 equipped:1 window:1 increasing:1 cpu:1 spain:1 provided:1 moreover:2 notation:3 cm:1 developed:1 finding:3 ghosh:1 temporal:7 every:1 ti:2 blumenthal:1 interactive:1 demonstrates:1 rm:5 healthcare:8 control:5 medical:13 unit:2 superiority:1 penn:1 danihelka:1 positive:3 before:2 dropped:1 attend:2 treat:1 sd:26 safety:1 consequence:1 service:1 analyzing:1 amia:1 approximately:2 black:2 rnns:9 blunsom:1 dynamically:1 suggests:1 ease:1 limited:1 bi:3 adoption:1 practice:1 bootstrap:1 procedure:6 dunn:1 peissig:1 area:1 rnn:59 drug:3 significantly:2 word:8 suggest:1 petersen:1 selection:1 put:2 risk:11 applying:1 context:6 influence:2 www:3 demonstrated:1 roth:1 chronic:1 straightforward:1 attention:57 kale:1 thompson:1 simplicity:1 identifying:2 disorder:3 scipy:1 attending:1 rule:1 examines:1 lamblin:1 kay:1 embedding:10 stability:1 population:1 coordinate:3 merri:1 annals:1 target:5 construction:2 caption:1 distinguishing:1 us:6 jaitly:1 diabetes:1 elkan:1 element:4 adadelta:2 recognition:3 trend:1 preprint:7 wang:2 calculate:2 fleisher:1 ensures:1 sun:4 hinders:1 plo:1 trade:1 highest:1 disease:10 rq:1 govern:1 complexity:1 warde:1 dynamic:1 predictive:14 efficiency:2 completely:1 icassp:1 schwenk:1 represented:6 train:5 walter:2 describe:3 zemel:1 tell:1 aggregate:3 hyper:2 outcome:1 choosing:1 encoded:1 supplementary:1 valued:1 encoder:1 favor:1 statistic:1 gi:4 g1:1 ability:1 jointly:1 itself:1 superscript:1 sequence:27 rr:3 advantage:1 propose:2 sowell:1 denny:1 combining:1 date:4 translate:2 intuitive:1 normalize:1 scalability:2 assessing:1 jsp:2 comparative:3 generating:10 object:1 help:1 derive:1 recurrent:12 bedside:1 pose:1 informs:1 measured:1 ij:2 nearest:1 ac:7 develop:1 thal:1 eq:6 edward:1 recovering:1 implemented:2 predicted:1 rosenbaum:1 distilling:1 hermann:1 opened:1 exploration:1 require:1 espeholt:1 feeding:1 preliminary:1 sufficiently:1 koch:1 predict:10 desjardins:1 achieves:2 vary:1 readmission:1 omitted:1 esteban:1 label:10 tanh:1 sivaraman:1 largest:2 gehrke:1 successfully:2 tool:1 weighted:2 always:1 modified:2 grouper:2 pacheco:1 ej:3 hj:1 varying:1 gatech:2 derived:2 focus:3 morton:1 improvement:2 properly:2 indicates:1 likelihood:2 esl:14 contrast:1 medication:13 sigkdd:1 baseline:8 typically:2 entire:4 hidden:14 relation:3 her:1 koller:1 mimicking:1 overall:1 among:2 classification:5 retaining:1 plan:1 art:1 special:2 softmax:9 initialize:1 timestamp:2 once:1 frasconi:1 represents:1 broad:1 unsupervised:1 icml:2 mimic:2 future:2 report:2 inherent:1 preserve:2 individual:2 intended:1 consisting:1 mckinstry:1 attempt:1 mlp:9 possibility:1 mnih:2 multiply:1 evaluation:3 abstractive:1 photonics:1 farley:1 accurate:2 andy:1 tree:1 old:2 taylor:1 rush:1 column:1 modeling:8 xeon:1 earlier:1 rao:1 stewart:1 caruana:1 engl:1 phrase:1 cost:3 deviation:1 predictor:2 successful:2 johnson:1 dependency:1 considerably:1 cho:3 lstm:2 international:1 retain:50 systematic:3 physician:4 probabilistic:1 concrete:1 squared:1 
central:1 emnlp:2 cognitive:1 inefficient:1 simard:1 li:1 chorowski:1 account:2 bergstra:1 summarized:2 coefficient:4 jha:1 schulam:1 vi:18 h1:2 view:1 diagnose:3 responsibility:1 analyze:1 lowe:1 doctor:3 portion:1 hf:19 competitive:1 compiler:1 contribution:19 minimize:1 publicly:1 accuracy:17 variance:1 efficiently:1 ensemble:3 yield:1 miller:1 identify:1 ybi:1 directional:1 conceptually:1 vincent:1 kavukcuoglu:1 emphasizes:1 notoriously:1 cc:4 rectified:1 whenever:1 failure:8 involved:1 e2:1 static:1 gain:1 dataset:7 knowledge:1 excision:2 dimensionality:2 car:1 cj:3 sophisticated:2 campbell:1 feed:2 higher:3 day:4 follow:1 methodology:1 supervised:1 wei:1 formulation:1 done:1 box:1 diagnosed:3 generality:1 evaluated:1 symptom:1 lastly:1 working:1 receives:1 hand:1 mistakenly:1 ei:1 sturm:1 logistic:7 quality:6 irnn:1 building:1 concept:1 true:2 regularization:1 read:1 visualizing:2 comprehend:1 during:1 recurrence:2 auc:3 unambiguous:1 noted:1 criterion:1 trying:1 mohammad:1 demonstrate:1 newsletter:1 interpreting:3 reasoning:1 image:3 wise:1 recently:1 common:2 sigmoid:1 superior:1 overview:2 individualizing:1 volume:1 million:2 association:1 interpretation:9 occurred:1 interpret:2 trait:1 bougares:1 significant:1 measurement:1 ai:1 tuning:1 ehr:16 teaching:1 language:7 stable:1 longer:2 similarity:1 gj:1 align:1 dominant:2 multivariate:1 imbalanced:1 recent:3 reverse:12 schmidhuber:1 nvidia:1 underlined:1 binary:4 yi:20 joshua:1 inverted:1 seen:2 preserving:1 neg:1 care:6 additional:1 anandan:1 aggregated:1 determine:2 period:2 converge:1 multiple:2 match:1 clinical:16 offer:3 cross:3 long:3 divided:1 e1:2 visit:58 calculates:1 prediction:27 variant:3 regression:6 impact:2 multilayer:1 patient:31 metric:1 heterogeneous:1 arxiv:14 represent:3 hochreiter:1 c1:1 receive:1 whereas:1 separately:1 addressed:2 decreased:1 diagram:1 source:3 suffered:1 suleyman:1 breuleux:1 rest:1 bringing:1 file:3 med:3 bahdanau:3 dpm:10 astudillo:1 effectiveness:2 alzheimer:1 delegate:1 chopra:1 leverage:1 yang:1 cohort:3 bengio:7 embeddings:5 xj:9 architecture:2 lasso:1 prognosis:1 reduce:1 idea:1 cn:1 tradeoff:1 icd:1 motivated:1 expression:1 utility:1 gb:1 speech:2 deep:2 impairment:1 heess:1 detailed:1 karpathy:1 clutter:1 montreal:1 documented:2 generate:13 http:4 wci:2 cuda:1 diagnostic:1 correctly:2 per:1 diagnosis:11 diverse:1 group:1 key:3 four:1 elhadad:1 backward:1 v1:10 ram:1 button:1 year:3 sum:3 run:1 electronic:7 wu:1 draw:1 decision:2 appendix:6 comparable:6 layer:19 hi:5 followed:1 distinguish:1 courville:2 optic:1 constraint:1 x2:1 software:2 personalized:1 lipton:1 generates:2 speed:1 span:2 nboer:1 martin:1 influential:3 developing:1 according:1 clinically:1 slightly:1 cardiac:4 sheikh:1 making:6 intuitively:1 theano:2 heart:11 computationally:1 visualization:4 count:1 mechanism:11 end:15 gulcehre:1 available:1 ferris:1 rewritten:2 gamst:1 progression:4 v2:1 appropriate:2 bahadori:4 occurrence:1 batch:2 encounter:2 shah:1 rp:2 original:4 denotes:4 remaining:1 include:1 running:2 completed:3 nlp:1 opportunity:1 top:1 newton:1 graphical:1 zeiler:1 medicine:3 calculating:2 exploit:1 carlson:1 gallego:1 especially:1 society:1 gregor:1 objective:1 skin:6 added:2 digest:1 strategy:2 primary:1 pneumonia:1 traditional:7 che:1 gradient:2 iclr:3 distance:1 reversed:3 lou:1 decoder:1 unstable:1 assuming:1 degenerated:1 code:23 length:2 modeled:1 relationship:1 index:6 mini:2 ratio:1 difficult:3 negative:3 ba:2 implementation:2 proper:1 summarization:1 allowing:1 observation:1 chute:1 descent:1 
emulating:1 hinton:1 precise:1 jimeng:1 kocisky:1 gru:2 required:1 sentence:7 aerospace:1 barcelona:1 nip:5 address:1 deserve:1 adult:1 chaudhry:1 challenge:2 summarize:2 interpretability:9 max:2 green:1 belief:1 memory:1 power:2 event:4 natural:1 rely:2 difficulty:1 taha:1 predicting:5 atherosclerosis:2 improve:3 github:1 technology:3 identifies:1 axis:2 extract:1 health:12 tresp:1 review:2 epoch:6 literature:1 understanding:1 multiplication:1 relative:1 graf:2 loss:4 fully:1 hcup:2 generation:7 limitation:1 coronary:2 versus:1 validation:2 degree:1 sufficient:1 cd:18 translation:5 last:1 keeping:2 rasmussen:1 perceptron:1 institute:1 neighbor:1 wide:1 taking:1 sparse:1 benefit:2 van:1 curve:1 depth:1 vocabulary:1 dimension:2 calculated:1 evaluating:1 genome:1 made:2 avg:3 adaptive:1 erhan:1 cope:1 transaction:1 k80:1 confirm:2 hayes:1 assumed:1 tuples:1 xi:18 neurology:1 alternatively:1 continuous:1 latent:1 quantifies:1 why:1 table:6 additionally:1 underperformed:1 learn:3 robust:1 ca:5 ignoring:2 serdyuk:1 e5:1 complex:3 domain:1 vj:1 did:2 apr:1 linearly:1 intelligible:1 hyperparameters:2 profile:1 turian:1 tesla:1 facilitating:1 lesion:2 x1:14 xu:1 intel:1 roc:1 georgia:1 shield:1 sub:1 choi:3 embed:1 gov:2 specific:5 xt:6 bastien:1 showing:1 symbol:1 list:2 sensing:1 workshop:1 sequential:5 effectively:1 importance:1 ci:7 magnitude:1 occurring:1 sparser:1 entropy:3 likely:2 explore:1 visual:3 expressed:1 hvd:5 scalar:3 aa:7 corresponds:1 wolf:1 relies:1 extracted:1 acm:1 grefenstette:1 weston:1 goal:3 viewed:1 month:1 towards:1 content:1 considerable:1 change:7 saria:2 folded:1 determined:1 reducing:1 total:1 hospital:2 experimental:1 meaningful:1 indicating:1 formally:1 internal:2 support:1 armstrong:1 tested:2 |
Exponential expressivity in deep neural networks
through transient chaos
Ben Poole¹, Subhaneil Lahiri¹, Maithra Raghu², Jascha Sohl-Dickstein², Surya Ganguli¹
¹Stanford University, ²Google Brain
{benpoole,sulahiri,sganguli}@stanford.edu, {maithra,jaschasd}@google.com
Abstract
We combine Riemannian geometry with the mean field theory of high dimensional
chaos to study the nature of signal propagation in generic, deep neural networks
with random weights. Our results reveal an order-to-chaos expressivity phase
transition, with networks in the chaotic phase computing nonlinear functions whose
global curvature grows exponentially with depth but not width. We prove this
generic class of deep random functions cannot be efficiently computed by any shallow network, going beyond prior work restricted to the analysis of single functions.
Moreover, we formalize and quantitatively demonstrate the long conjectured idea
that deep networks can disentangle highly curved manifolds in input space into flat
manifolds in hidden space. Our theoretical analysis of the expressive power of deep
networks broadly applies to arbitrary nonlinearities, and provides a quantitative
underpinning for previously abstract notions about the geometry of deep functions.
1
Introduction
Deep feedforward neural networks have achieved remarkable performance across many domains
[1–6]. A key factor thought to underlie their success is their high expressivity. This informal notion
has manifested itself primarily in two forms of intuition. The first is that deep networks can compactly
express highly complex functions over input space in a way that shallow networks with one hidden
layer and the same number of neurons cannot. The second piece of intuition, which has captured
the imagination of machine learning [7] and neuroscience [8] alike, is that deep neural networks can
disentangle highly curved manifolds in input space into flattened manifolds in hidden space. These
intuitions, while attractive, have been difficult to formalize mathematically and thus test rigorously.
For the first intuition, seminal works have exhibited examples of particular functions that can be
computed with a polynomial number of neurons (in the input dimension) in a deep network but
require an exponential number of neurons in a shallow network [9–13]. This raises a central open
question: are such functions merely rare curiosities, or is any function computed by a generic deep
network not efficiently computable by a shallow network? The theoretical techniques employed in
prior work both limited the applicability of theory to specific nonlinearities and dictated the particular
measure of deep functional complexity involved. For example, [9] focused on ReLU nonlinearities
and number of linear regions as a complexity measure, while [10] focused on sum-product networks
and the number of monomials as complexity measure, and [14] focused on Pfaffian nonlinearities and
topological measures of complexity, like the sum of Betti numbers of a decision boundary (however,
see [15] for an interesting analysis of a general class of compositional functions). The limits of
prior theoretical techniques raise another central question: is there a unifying theoretical framework
for deep neural expressivity that is simultaneously applicable to arbitrary nonlinearities, generic
networks, and a natural, general measure of functional complexity?
Code to reproduce all results available at: https://github.com/ganguli-lab/deepchaos
Here we attack both central problems of deep neural expressivity by combining Riemannian geometry
[16] and dynamical mean field theory [17]. This novel combination of tools enables us to show that
for very broad classes of nonlinearities, even random deep neural networks can construct hidden
internal representations whose global extrinsic curvature grows exponentially with depth but not width.
Our geometric framework enables us to quantitatively define a notion of disentangling and verify
this notion in deep random networks. Furthermore, our methods yield insights into the emergent,
deterministic nature of signal propagation through large random feedforward networks, revealing the
existence of an order to chaos transition as a function of the statistics of weights and biases. We find
that the transient, finite depth evolution in the chaotic regime underlies the origins of exponential
expressivity in deep random networks. In a companion paper [18], we study several related measures
of expressivity in deep random neural networks with piecewise linear activations.
2
A mean field theory of deep nonlinear signal propagation
Consider a deep feedforward network with $D$ layers of weights $\mathbf{W}^1, \dots, \mathbf{W}^D$ and $D+1$ layers of
neural activity vectors $\mathbf{x}^0, \dots, \mathbf{x}^D$, with $N_l$ neurons in each layer $l$, so that $\mathbf{x}^l \in \mathbb{R}^{N_l}$ and $\mathbf{W}^l$ is an
$N_l \times N_{l-1}$ weight matrix. The feedforward dynamics elicited by an input $\mathbf{x}^0$ is given by
$$\mathbf{x}^l = \phi(\mathbf{h}^l), \qquad \mathbf{h}^l = \mathbf{W}^l \mathbf{x}^{l-1} + \mathbf{b}^l, \qquad \text{for } l = 1, \dots, D, \tag{1}$$
where $\mathbf{b}^l$ is a vector of biases, $\mathbf{h}^l$ is the pattern of inputs to neurons at layer $l$, and $\phi$ is a single
neuron scalar nonlinearity that acts component-wise to transform inputs $\mathbf{h}^l$ to activities $\mathbf{x}^l$. We
wish to understand the nature of typical functions computable by such networks, as a consequence
of their depth. We therefore study ensembles of random networks in which each of the synaptic
weights $W^l_{ij}$ are drawn i.i.d. from a zero mean Gaussian with variance $\sigma_w^2 / N_{l-1}$, while the biases
are drawn i.i.d. from a zero mean Gaussian with variance $\sigma_b^2$. This weight scaling ensures that the
input contribution to each individual neuron at layer $l$ from activities in layer $l-1$ remains $O(1)$,
independent of the layer width $N_{l-1}$. This ensemble constitutes a maximum entropy distribution over
deep neural networks, subject to constraints on the means and variances of weights and biases. This
ensemble induces no further structure in the resulting set of deep functions, so its analysis provides
an opportunity to understand the specific contribution of depth alone to the nature of typical functions
computed by deep networks.
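The random ensemble and the update in (1) are straightforward to simulate directly. The sketch below
is a minimal NumPy implementation (the function name, layer sizes, and parameter values are our own
choices for illustration, not from the paper):

```python
import numpy as np

def propagate(x0, depth, width, sigma_w, sigma_b, phi=np.tanh, seed=0):
    """Propagate inputs x0 (shape: n_inputs x N_0) through a random deep network, eq. (1).

    Each W^l has i.i.d. N(0, sigma_w^2 / N_{l-1}) entries; each b^l has i.i.d.
    N(0, sigma_b^2) entries. Returns the pre-activations h^1, ..., h^D."""
    rng = np.random.RandomState(seed)
    hs, x = [], x0
    for _ in range(depth):
        n_in = x.shape[1]
        W = rng.randn(width, n_in) * sigma_w / np.sqrt(n_in)
        b = rng.randn(width) * sigma_b
        h = x @ W.T + b            # h^l = W^l x^{l-1} + b^l
        hs.append(h)
        x = phi(h)                 # x^l = phi(h^l)
    return hs

# Track the per-layer second moment of h^l (the length q^l defined in eq. (2) below):
x0 = np.random.randn(1, 1000)
qs = [np.mean(h**2) for h in propagate(x0, depth=10, width=1000,
                                       sigma_w=4.0, sigma_b=0.3)]
print(np.round(qs, 3))             # q^l settles to a fixed point within a few layers
```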
In the limit of large layer widths, $N_l \gg 1$, certain aspects of signal propagation through deep
random neural networks take on an essentially deterministic character. This emergent determinism
in large random neural networks enables us to understand how the Riemannian geometry of simple
manifolds in the input layer $\mathbf{x}^0$ is typically modified as the manifold propagates into the deep layers.
For example, consider the simplest case of a single input vector $\mathbf{x}^0$. As it propagates through the
network, its length in downstream layers will change. We track this changing length by computing
the normalized squared length of the input vector at each layer:
$$q^l = \frac{1}{N_l} \sum_{i=1}^{N_l} (h_i^l)^2. \tag{2}$$
This length is the second moment of the empirical distribution of inputs $h_i^l$ across all $N_l$ neurons
in layer $l$. For large $N_l$, this empirical distribution converges to a zero mean Gaussian since each
$h_i^l = \sum_j W_{ij}^l \phi(h_j^{l-1}) + b_i^l$ is a weighted sum of a large number of uncorrelated random variables,
i.e. the weights $W_{ij}^l$ and biases $b_i^l$, which are independent of the activity in previous layers. By
propagating this Gaussian distribution across one layer, we obtain an iterative map for $q^l$ in (2):
$$q^l = \mathcal{V}(q^{l-1} \mid \sigma_w, \sigma_b) \equiv \sigma_w^2 \int \mathcal{D}z \; \phi\!\left(\sqrt{q^{l-1}}\, z\right)^2 + \sigma_b^2, \qquad \text{for } l = 2, \dots, D, \tag{3}$$
where $\mathcal{D}z = \frac{dz}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}$ is the standard Gaussian measure, and the initial condition is $q^1 = \sigma_w^2 q^0 + \sigma_b^2$,
where $q^0 = \frac{1}{N_0}\, \mathbf{x}^0 \cdot \mathbf{x}^0$ is the length in the initial activity layer. See Supplementary Material (SM)
for a derivation of (3). Intuitively, the integral over $z$ in (3) replaces an average over the empirical
distribution of $h_i^l$ across neurons $i$ in layer $l$ at large layer width $N_l$.
The function $\mathcal{V}$ in (3) is an iterative variance, or length, map that predicts how the length of an input in
(2) changes as it propagates through the network. This length map is plotted in Fig. 1A for the special
Figure 1: Dynamics of the squared length $q^l$ for a sigmoidal network ($\phi(h) = \tanh(h)$) with 1000
hidden units. (A) The iterative length map in (3) for 3 different $\sigma_w$ at $\sigma_b = 0.3$. Theoretical
predictions (solid lines) match well with individual network simulations (dots). Stars reflect fixed
points $q^*$ of the map. (B) The iterative dynamics of the length map yields rapid convergence of $q^l$
to its fixed point $q^*$, independent of initial condition (lines=theory; dots=simulation). (C) $q^*$ as a
function of $\sigma_w$ and $\sigma_b$. (D) Number of iterations required to achieve $\approx 1\%$ fractional deviation off
the fixed point. The $(\sigma_b, \sigma_w)$ pairs in (A,B) are marked with color matched circles in (C,D).
case of a sigmoidal nonlinearity, $\phi(h) = \tanh(h)$. For monotonic nonlinearities, this length map is
a monotonically increasing, concave function whose intersections with the unity line determine its
fixed points $q^*(\sigma_w, \sigma_b)$. For $\sigma_b = 0$ and $\sigma_w < 1$, the only intersection is at $q^* = 0$. In this bias-free,
small weight regime, the network shrinks all inputs to the origin. For $\sigma_w > 1$ and $\sigma_b = 0$, the $q^* = 0$
fixed point becomes unstable and the length map acquires a second nonzero fixed point, which is
stable. In this bias-free, large weight regime, the network expands small inputs and contracts large
inputs. Also, for any nonzero bias $\sigma_b$, the length map has a single stable non-zero fixed point. In such
a regime, even with small weights, the injected biases at each layer prevent signals from decaying to
0. The dynamics of the length map leads to rapid convergence of length to its fixed point with depth
(Fig. 1B,D), often within only 4 layers. The fixed points $q^*(\sigma_w, \sigma_b)$ are shown in Fig. 1C.
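The Gaussian integral in (3) is one-dimensional, so the length map and its fixed point are cheap to
evaluate with Gauss–Hermite quadrature after rescaling the nodes for the standard Gaussian measure.
A minimal sketch (helper names are ours):

```python
import numpy as np

# Gauss–Hermite quadrature rescaled for the standard Gaussian measure Dz:
# \int Dz f(z) ~= sum_k w_k f(z_k), with z_k = sqrt(2)*x_k, w_k = W_k / sqrt(pi).
_x, _W = np.polynomial.hermite.hermgauss(61)
_z, _w = np.sqrt(2.0) * _x, _W / np.sqrt(np.pi)

def length_map(q, sigma_w, sigma_b, phi=np.tanh):
    """One application of the length map V(q | sigma_w, sigma_b), eq. (3)."""
    return sigma_w**2 * np.sum(_w * phi(np.sqrt(q) * _z)**2) + sigma_b**2

def fixed_point_q(sigma_w, sigma_b, q0=1.0, iters=100):
    """Iterate the length map to convergence to obtain q*(sigma_w, sigma_b)."""
    q = q0
    for _ in range(iters):
        q = length_map(q, sigma_w, sigma_b)
    return q

print(fixed_point_q(4.0, 0.3))   # nonzero stable fixed point in the large-weight regime
```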
3
Transient chaos in deep networks
Now consider the layer-wise propagation of two inputs $\mathbf{x}^{0,1}$ and $\mathbf{x}^{0,2}$. The geometry of these two
inputs as they propagate through the network is captured by the 2 by 2 matrix of inner products:
$$q_{ab}^l = \frac{1}{N_l} \sum_{i=1}^{N_l} h_i^l(\mathbf{x}^{0,a})\, h_i^l(\mathbf{x}^{0,b}), \qquad a, b \in \{1, 2\}. \tag{4}$$
The dynamics of the two diagonal terms are each theoretically predicted by the length map in (3). We
derive (see SM) a correlation map $\mathcal{C}$ that predicts the layer-wise dynamics of $q_{12}^l$:
$$q_{12}^l = \mathcal{C}(c_{12}^{l-1}, q_{11}^{l-1}, q_{22}^{l-1} \mid \sigma_w, \sigma_b) \equiv \sigma_w^2 \int \mathcal{D}z_1\, \mathcal{D}z_2 \; \phi(u_1)\, \phi(u_2) + \sigma_b^2, \tag{5}$$
$$u_1 = \sqrt{q_{11}^{l-1}}\, z_1, \qquad u_2 = \sqrt{q_{22}^{l-1}} \left[ c_{12}^{l-1} z_1 + \sqrt{1 - (c_{12}^{l-1})^2}\, z_2 \right],$$
where $c_{12}^l = q_{12}^l (q_{11}^l q_{22}^l)^{-1/2}$ is the correlation coefficient. Here $z_1$ and $z_2$ are independent standard
Gaussian variables, while $u_1$ and $u_2$ are correlated Gaussian variables with covariance matrix
$\langle u_a u_b \rangle = q_{ab}^{l-1}$. Together, (3) and (5) constitute a theoretical prediction for the typical evolution of
the geometry of 2 points in (4) in a fixed large network.
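The two-dimensional integral in (5) can be evaluated the same way on a tensor-product quadrature
grid. The sketch below reuses the rescaled nodes `_z`, `_w` and `fixed_point_q` from the length-map
sketch above, and already iterates the normalized C-map of eq. (6) below (names are ours):

```python
import numpy as np

Z1, Z2 = np.meshgrid(_z, _z)        # tensor-product Gauss–Hermite grid
W12 = np.outer(_w, _w)              # product quadrature weights

def corr_map(c12, q11, q22, sigma_w, sigma_b, phi=np.tanh):
    """One application of the correlation map C(c12, q11, q22 | sigma_w, sigma_b), eq. (5)."""
    u1 = np.sqrt(q11) * Z1
    u2 = np.sqrt(q22) * (c12 * Z1 + np.sqrt(1.0 - c12**2) * Z2)
    return sigma_w**2 * np.sum(W12 * phi(u1) * phi(u2)) + sigma_b**2

# C-map iteration of eq. (6): c^l = C(c^{l-1}, q*, q* | sigma_w, sigma_b) / q*
qstar = fixed_point_q(4.0, 0.3)
c = 0.5
for _ in range(20):
    c = corr_map(c, qstar, qstar, 4.0, 0.3) / qstar
print(c)                            # approaches the fixed point c* (cf. Fig. 2B)
```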
Analysis of these equations reveals an interesting order to chaos transition in the $\sigma_w$ and $\sigma_b$ plane. In
particular, what happens to two nearby points as they propagate through the layers? Their relation to
each other can be tracked by the correlation coefficient $c_{12}^l$ between the two points, which approaches
a fixed point $c^*(\sigma_w, \sigma_b)$ at large depth. Since the length of each point rapidly converges to $q^*(\sigma_w, \sigma_b)$,
as shown in Fig. 1BD, we can compute $c^*$ by simply setting $q_{11}^l = q_{22}^l = q^*(\sigma_w, \sigma_b)$ in (5) and
dividing by $q^*$ to obtain an iterative correlation coefficient map, or C-map, for $c_{12}^l$:
$$c_{12}^l = \frac{1}{q^*}\, \mathcal{C}(c_{12}^{l-1}, q^*, q^* \mid \sigma_w, \sigma_b). \tag{6}$$
This C-map is shown in Fig. 2A. It always has a fixed point at $c^* = 1$ as can be checked by direct
calculation. However, the stability of this fixed point depends on the slope of the map at 1, which is
$$\chi_1 \equiv \left. \frac{\partial c_{12}^l}{\partial c_{12}^{l-1}} \right|_{c=1} = \sigma_w^2 \int \mathcal{D}z \left[ \phi'\!\left(\sqrt{q^*}\, z\right) \right]^2. \tag{7}$$
See SM for a derivation of (7). If the slope $\chi_1$ is less than 1, then the C-map is above the unity line,
the fixed point at 1 under the C-map in (6) is stable, and nearby points become more similar over time.
Figure 2: Dynamics of correlations, $c_{12}^l$, in a sigmoidal network with $\phi(h) = \tanh(h)$. (A) The
C-map in (6) for the same $\sigma_w$ and $\sigma_b = 0.3$ as in Fig. 1A. (B) The C-map dynamics, derived from
both theory, through (6) (solid lines), and numerical simulations of (1) with $N_l = 1000$ (dots). (C)
Fixed points $c^*$ of the C-map. (D) The slope of the C-map at 1, $\chi_1$, partitions the space (black dotted
line at $\chi_1 = 1$) into chaotic ($\chi_1 > 1$, $c^* < 1$) and ordered ($\chi_1 < 1$, $c^* = 1$) regions.
Conversely, if $\chi_1 > 1$ then this fixed point is unstable, and nearby points separate as they propagate
through the layers. Thus we can intuitively understand $\chi_1$ as a multiplicative stretch factor. This
intuition can be made precise by considering the Jacobian $J_{ij}^l = W_{ij}^l\, \phi'(h_j^{l-1})$ at a point $\mathbf{h}^{l-1}$ with
length $q^*$. $\mathbf{J}^l$ is a linear approximation of the network map from layer $l-1$ to $l$ in the vicinity of $\mathbf{h}^{l-1}$.
Therefore a small random perturbation $\mathbf{h}^{l-1} + \mathbf{u}$ will map to $\mathbf{h}^l + \mathbf{J}\mathbf{u}$. The growth of the perturbation,
$\|\mathbf{J}\mathbf{u}\|_2^2 / \|\mathbf{u}\|_2^2$, becomes $\chi_1(q^*)$ after averaging over the random perturbation $\mathbf{u}$, weight matrix $\mathbf{W}^l$,
and Gaussian distribution of $h_i^{l-1}$ across $i$. Thus $\chi_1$ directly reflects the typical multiplicative growth
or shrinkage of a random perturbation across one layer.
The dynamics of the iterative C-map and its agreement with network simulations is shown in Fig.
2B. The correlation dynamics are much slower than the length dynamics because the C-map is closer
to the unity line (Fig. 2A) than the length map (Fig. 1A). Thus correlations typically take about 20
layers to approach the fixed point, while lengths need only 4. The fixed point $c^*$ and slope $\chi_1$ of
the C-map are shown in Fig. 2CD. For any fixed, finite $\sigma_b$, as $\sigma_w$ increases three qualitative regions
occur. For small $\sigma_w$, $c^* = 1$ is the only fixed point, and it is stable because $\chi_1 < 1$. In this strong
bias regime, any two input points converge to each other as they propagate through the network. As
$\sigma_w$ increases, $\chi_1$ increases and crosses 1, destabilizing the $c^* = 1$ fixed point. In this intermediate
regime, a new stable fixed point $c^*$ appears, which decreases as $\sigma_w$ increases. Here an equal footing
competition between weights and nonlinearities (which de-correlate inputs) and the biases (which
correlate them), leads to a finite $c^*$. At larger $\sigma_w$, the strong weights overwhelm the biases and
maximally de-correlate inputs to make them orthogonal, leading to a stable fixed point at $c^* = 0$.
Thus the equation $\chi_1(\sigma_w, \sigma_b) = 1$ yields a phase transition boundary in the $(\sigma_w, \sigma_b)$ plane, separating
it into a chaotic (or ordered) phase, in which nearby points separate (or converge). In dynamical
systems theory, the logarithm of $\chi_1$ is related to the well known Lyapunov exponent which is positive
(or negative) for chaotic (or ordered) dynamics. However, in a feedforward network, the dynamics is
truncated at a finite depth $D$, and hence the dynamics are a form of transient chaos.
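Because (7) is again a one-dimensional Gaussian integral, the order-to-chaos boundary
$\chi_1(\sigma_w, \sigma_b) = 1$ can be traced by a simple scan of the $(\sigma_w, \sigma_b)$ plane. A sketch reusing
`fixed_point_q` and the quadrature nodes from Section 2, for $\phi = \tanh$ (so $\phi'(h) = 1 - \tanh^2(h)$;
names are ours):

```python
import numpy as np

def chi1(sigma_w, sigma_b, dphi=lambda h: 1.0 - np.tanh(h)**2):
    """Stretch factor chi1 of eq. (7): the slope of the C-map at c = 1."""
    qstar = fixed_point_q(sigma_w, sigma_b)
    return sigma_w**2 * np.sum(_w * dphi(np.sqrt(qstar) * _z)**2)

# Scan sigma_w at fixed sigma_b; chi1 crossing 1 marks the transition (cf. Fig. 2D)
for sw in [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]:
    print(sw, round(chi1(sw, 0.3), 3))
```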
4
The propagation of manifold geometry through deep networks
Now consider a 1 dimensional manifold $\mathbf{x}^0(\theta)$ in input space, where $\theta$ is an intrinsic scalar coordinate
on the manifold. This manifold propagates to a new manifold $\mathbf{h}^l(\theta) = \mathbf{h}^l(\mathbf{x}^0(\theta))$ in the vector
space of inputs to layer $l$. The typical geometry of the manifold in the $l$'th layer is summarized
by $q^l(\theta_1, \theta_2)$, which for any $\theta_1$ and $\theta_2$ is defined by (4) with the choice $\mathbf{x}^{0,a} = \mathbf{x}^0(\theta_1)$ and $\mathbf{x}^{0,b} = \mathbf{x}^0(\theta_2)$. The theory for the propagation of pairs of points applies to all pairs of points on the
manifold, so intuitively, we expect that in the chaotic phase of a sigmoidal network, the manifold
should in some sense de-correlate, and become more complex, while in the ordered phase the
manifold should contract around a central point. This theoretical prediction of equations (3) and
(5) is quantitatively confirmed in simulations in Fig. 3, when the input is a simple manifold, the
circle, $\mathbf{h}^1(\theta) = \sqrt{N_1 q}\, \left[\mathbf{u}^0 \cos(\theta) + \mathbf{u}^1 \sin(\theta)\right]$, where $\mathbf{u}^0$ and $\mathbf{u}^1$ form an orthonormal basis for a 2
dimensional subspace of $\mathbb{R}^{N_1}$ in which the circle lives. The scaling is chosen so that each neuron has
input activity $O(1)$. Also, for simplicity, we choose the fixed point radius $q = q^*$ in Fig. 3.
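The Fig. 3 experiment can be reproduced by combining the simulation sketch from Section 2 with an
explicit circle input. A hedged sketch (the subspace construction, seeds and parameter values are our
choices; `propagate` and `fixed_point_q` are the helpers defined above):

```python
import numpy as np

N1, n_theta = 1000, 200
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
dtheta = theta[1] - theta[0]

# Orthonormal basis (u0, u1) for a random 2D subspace of R^{N1}
rng = np.random.RandomState(1)
U, _ = np.linalg.qr(rng.randn(N1, 2))
q = fixed_point_q(4.0, 0.3)                       # start at the length fixed point q*
h1 = np.sqrt(N1 * q) * (np.outer(np.cos(theta), U[:, 0]) +
                        np.outer(np.sin(theta), U[:, 1]))

# Propagate the circle deeper (layers 2..10) and inspect the hidden geometry
hs = propagate(np.tanh(h1), depth=9, width=N1, sigma_w=4.0, sigma_b=0.3, seed=2)
h10 = hs[-1] - hs[-1].mean(axis=0)
s = np.linalg.svd(h10, compute_uv=False)
print((s**2 / np.sum(s**2))[:5])                  # variance explained, cf. Fig. 3A insets
```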
Figure 3: Propagating a circle through three random sigmoidal networks with varying $\sigma_w$ and fixed
$\sigma_b = 0.3$. (A) Projection of hidden inputs of simulated networks at layer 5 and 10 onto their first
three principal components. Insets show the fraction of variance explained by the first 5 singular
values. For large weights (bottom), the distribution of singular values gets flatter and the projected
curve is more tangled. (B) The autocorrelation, $c_{12}^l(\Delta\theta) = \int d\theta\, q^l(\theta, \theta + \Delta\theta)/q^*$, of hidden inputs
as a function of layer for simulated networks. (C) The theoretical predictions from (6) (solid lines)
compared to the average (dots) and standard deviation across $\theta$ (shaded) in a simulated network.
To quantitatively understand the layer-wise growth of complexity of this manifold, it is useful to turn
to concepts in Riemannian geometry [16]. First, at each point $\theta$, the manifold $\mathbf{h}(\theta)$ (we temporarily
suppress the layer index $l$) has a tangent, or velocity vector $\mathbf{v}(\theta) = \partial_\theta \mathbf{h}(\theta)$. Intuitively, curvature
is related to how quickly this tangent vector rotates in the ambient space $\mathbb{R}^N$ as one moves along
the manifold, or in essence the acceleration vector $\mathbf{a}(\theta) = \partial_\theta \mathbf{v}(\theta)$. Now at each point $\theta$, when both
are nonzero, $\mathbf{v}(\theta)$ and $\mathbf{a}(\theta)$ span a 2 dimensional subspace of $\mathbb{R}^N$. Within this subspace, there is a
unique circle of radius $R(\theta)$ that has the same position, velocity and acceleration vector as the curve
$\mathbf{h}(\theta)$ at $\theta$. This circle is known as the osculating circle (Fig. 4A), and the extrinsic curvature $\kappa(\theta)$ of
the curve is defined as $\kappa(\theta) = 1/R(\theta)$. Thus, intuitively, small radii of curvature $R(\theta)$ imply high
extrinsic curvature $\kappa(\theta)$. The extrinsic curvature of a curve depends only on its image in $\mathbb{R}^N$ and
is invariant with respect to the particular parameterization $\theta \to \mathbf{h}(\theta)$. For any parameterization, an
explicit expression for $\kappa(\theta)$ is given by $\kappa(\theta) = (\mathbf{v} \cdot \mathbf{v})^{-3/2} \sqrt{(\mathbf{v} \cdot \mathbf{v})(\mathbf{a} \cdot \mathbf{a}) - (\mathbf{v} \cdot \mathbf{a})^2}$ [16]. Note that
under a unit speed parameterization of the curve, so that $\mathbf{v}(\theta) \cdot \mathbf{v}(\theta) = 1$, we have $\mathbf{v}(\theta) \cdot \mathbf{a}(\theta) = 0$,
and $\kappa(\theta)$ is simply the norm of the acceleration vector.
Another measure of the curve's complexity is the length $L^E$ of its image in the ambient Euclidean
space. The Euclidean metric in $\mathbb{R}^N$ induces a metric $g^E(\theta) = \mathbf{v}(\theta) \cdot \mathbf{v}(\theta)$ on the curve, so that the
distance $dL^E$ moved in $\mathbb{R}^N$ as one moves from $\theta$ to $\theta + d\theta$ on the curve is $dL^E = \sqrt{g^E(\theta)}\, d\theta$. The
total curve length is $L^E = \int \sqrt{g^E(\theta)}\, d\theta$. However, even straight line segments can have a large
Euclidean length. Another interesting measure of length that takes into account curvature, is the
length of the image of the curve under the Gauss map. For a $K$ dimensional manifold $\mathcal{M}$ embedded in
Figure 4: Propagation of extrinsic curvature and length in a network with 1000 hidden units. (A)
An osculating circle. (B) A curve with unit tangent vectors at 4 points in ambient space, and
the image of these points under the Gauss map. (C-E) Propagation of curvature metrics based
on both theory derived from iterative maps in (3), (6) and (8) (solid lines) and simulations using
(1) (dots). (F) Schematic of the normal vector, tangent plane, and principal curvatures for a 2D
manifold embedded in $\mathbb{R}^3$. (G) Average principal curvatures for the largest and smallest 4 principal
curvatures ($\pm\kappa_1, \dots, \pm\kappa_4$) across locations $\mathbf{x}^*$ within one network. The principal curvatures all grow
exponentially as we backpropagate to the input layer. Panels F,G are discussed in Sec. 5.
$\mathbb{R}^N$, the Gauss map (Fig. 4B) maps a point $\theta \in \mathcal{M}$ to its $K$ dimensional tangent plane $T_\theta \mathcal{M} \in G_{K,N}$,
where $G_{K,N}$ is the Grassmannian manifold of all $K$ dimensional subspaces in $\mathbb{R}^N$. In the special case
of $K = 1$, $G_{K,N}$ is the sphere $S^{N-1}$ with antipodal points identified, since a 1-dimensional subspace
can be identified with a unit vector, modulo sign. The Gauss map takes a point $\theta$ on the curve and
maps it to the unit velocity vector $\hat{\mathbf{v}}(\theta) = \mathbf{v}(\theta) / \sqrt{\mathbf{v}(\theta) \cdot \mathbf{v}(\theta)}$. In particular, the natural metric on
$S^{N-1}$ induces a Gauss metric on the curve, given by $g^G(\theta) = (\partial_\theta \hat{\mathbf{v}}(\theta)) \cdot (\partial_\theta \hat{\mathbf{v}}(\theta))$, which measures
how quickly the unit tangent vector $\hat{\mathbf{v}}(\theta)$ changes as $\theta$ changes. Thus the distance $dL^G$ moved in
the Grassmannian $G_{K,N}$ as one moves from $\theta$ to $\theta + d\theta$ on the curve is $dL^G = \sqrt{g^G(\theta)}\, d\theta$, and the
length of the curve under the Gauss map is $L^G = \int \sqrt{g^G(\theta)}\, d\theta$. Furthermore, the Gauss metric is
related to the extrinsic curvature and the Euclidean metric via the relation $g^G(\theta) = \kappa(\theta)^2\, g^E(\theta)$ [16].
To illustrate these concepts, it is useful to compute all of them for the circle $\mathbf{h}^1(\theta)$ defined above:
$g^E(\theta) = Nq$, $L^E = 2\pi\sqrt{Nq}$, $\kappa(\theta) = 1/\sqrt{Nq}$, $g^G(\theta) = 1$, and $L^G = 2\pi$. As expected, $\kappa(\theta)$ is
the inverse of the radius of curvature, which is $\sqrt{Nq}$. Now consider how these quantities change
if the circle is scaled up so that $\mathbf{h}(\theta) \to \alpha\, \mathbf{h}(\theta)$. The length $L^E$ and radius scale up by $\alpha$, but the
curvature $\kappa$ scales down as $\alpha^{-1}$, and so $L^G$ does not change. Thus linear expansion increases length
and decreases curvature, thereby maintaining constant Grassmannian length $L^G$.
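All of these quantities can be estimated for any densely sampled curve by finite differences. The
sketch below sanity-checks them on the circle `h1` from the snippet in this section (helper names are
ours; the periodic wrap of the circle is ignored, so expect a small endpoint error):

```python
import numpy as np

def curve_geometry(h, dtheta):
    """Finite-difference estimates of g^E(theta), kappa(theta), L^E and L^G
    for a sampled curve h of shape (n_theta, N)."""
    v = np.gradient(h, dtheta, axis=0, edge_order=2)   # velocity v(theta)
    a = np.gradient(v, dtheta, axis=0, edge_order=2)   # acceleration a(theta)
    vv, aa, va = np.sum(v*v, 1), np.sum(a*a, 1), np.sum(v*a, 1)
    kappa = vv**-1.5 * np.sqrt(np.maximum(vv * aa - va**2, 0.0))
    LE = np.sum(np.sqrt(vv)) * dtheta                  # L^E = \int sqrt(g^E) dtheta
    LG = np.sum(kappa * np.sqrt(vv)) * dtheta          # L^G, using g^G = kappa^2 g^E
    return vv, kappa, LE, LG

gE, kappa, LE, LG = curve_geometry(h1, dtheta)
# Expected for the circle: L^E = 2*pi*sqrt(N1*q), kappa = 1/sqrt(N1*q), L^G = 2*pi
print(LE / (2*np.pi*np.sqrt(N1*q)), kappa.mean() * np.sqrt(N1*q), LG / (2*np.pi))
```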
We now show that nonlinear propagation of this same circle through a deep network can behave very
differently from linear expansion: in the chaotic regime, length can increase without any decrease
in extrinsic curvature! To remove the scaling with $N$ in the above quantities, we will work with the
renormalized quantities $\bar{\kappa} = \sqrt{N}\,\kappa$, $\bar{g}^E = \frac{1}{N}\, g^E$, and $\bar{L}^E = \frac{1}{\sqrt{N}}\, L^E$. Thus, $1/(\bar{\kappa})^2$ can be thought
of as a radius of curvature squared per neuron of the osculating circle, while $(\bar{L}^E)^2$ is the squared
Euclidean length of the curve per neuron. For the circle, these quantities are $q$ and $2\pi q$ respectively.
For simplicity, in the inputs to the first layer of neurons, we begin with a circle $\mathbf{h}^1(\theta)$ with squared
radius per neuron $q^1 = q^*$, so this radius is already at the fixed point of the length map in (3). In the
SM, we derive an iterative formula for the extrinsic curvature and Euclidean metric of this manifold
as it propagates through the layers of a deep network:
$$\bar{g}^{E,l} = \chi_1\, \bar{g}^{E,l-1}, \qquad (\bar{\kappa}^l)^2 = 3\,\frac{\chi_2}{\chi_1^2} + \frac{1}{\chi_1}\,(\bar{\kappa}^{l-1})^2, \qquad \bar{g}^{E,1} = q^*, \quad (\bar{\kappa}^1)^2 = 1/q^*, \tag{8}$$
where $\chi_1$ is the stretch factor defined in (7) and $\chi_2$ is defined analogously as
$$\chi_2 = \sigma_w^2 \int \mathcal{D}z \left[ \phi''\!\left(\sqrt{q^*}\, z\right) \right]^2. \tag{9}$$
$\chi_2$ is closely related to the second derivative of the C-map in (6) at $c_{12}^{l-1} = 1$; this second derivative is
$\chi_2 q^*$. See SM for a derivation of the evolution equations for extrinsic geometry in (8).
Intriguingly for a sigmoidal neural network, these evolution equations behave very differently in
the chaotic ($\chi_1 > 1$) versus ordered ($\chi_1 < 1$) phase. In the chaotic phase, the Euclidean metric $\bar{g}^E$
grows exponentially with depth due to multiplicative stretching through $\chi_1$. This stretching does
multiplicatively attenuate any curvature in layer $l-1$ by a factor $1/\chi_1$ (see the update equation for
$\bar{\kappa}^l$ in (8)), but new curvature is added in due to a nonzero $\chi_2$, which originates from the curvature of
the single neuron nonlinearity in (9). Thus, unlike in linear expansion, extrinsic curvature is not lost,
but maintained, and ultimately approaches a fixed point $\bar{\kappa}^*$. This implies that the global curvature
measure $\bar{L}^G$ grows exponentially with depth. These highly nontrivial predictions of the metric and
curvature evolution equations in (8) are quantitatively confirmed in simulations in Figure 4C-E.
Intuitively, this exponential growth of global curvature $\bar{L}^G$ in the chaotic phase implies that the curve
explores many different tangent directions in hidden representation space. This further implies that
the coordinate functions of the embedding $h_i^l(\theta)$ become highly complex curved basis functions
on the input manifold coordinate $\theta$, allowing a deep network to compute exponentially complex
functions over simple low dimensional manifolds (Figure 5A-C, details in SM).
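The recursion (8) needs only the two scalars $\chi_1$ and $\chi_2$, so the exponential growth of $\bar{g}^E$ and the
curvature fixed point $\bar{\kappa}^*$ are easy to verify. A sketch for $\phi = \tanh$, for which
$\phi''(h) = -2\tanh(h)\,(1-\tanh^2(h))$, reusing `fixed_point_q`, `chi1` and the quadrature nodes defined
above:

```python
import numpy as np

def chi2(sigma_w, sigma_b):
    """chi2 of eq. (9) for phi = tanh."""
    qstar = fixed_point_q(sigma_w, sigma_b)
    t = np.tanh(np.sqrt(qstar) * _z)
    ddphi = -2.0 * t * (1.0 - t**2)                 # tanh''(h)
    return sigma_w**2 * np.sum(_w * ddphi**2)

def curvature_evolution(sigma_w, sigma_b, depth=10):
    """Iterate eq. (8) for the renormalized metric gE and squared curvature kap2."""
    qstar = fixed_point_q(sigma_w, sigma_b)
    c1, c2 = chi1(sigma_w, sigma_b), chi2(sigma_w, sigma_b)
    gE, kap2 = qstar, 1.0 / qstar                   # initial conditions in eq. (8)
    out = [(gE, kap2)]
    for _ in range(depth - 1):
        gE, kap2 = c1 * gE, 3.0 * c2 / c1**2 + kap2 / c1
        out.append((gE, kap2))
    return out

for gE, kap2 in curvature_evolution(4.0, 0.3):
    print(gE, np.sqrt(kap2))   # gE grows ~ chi1^l; kappa-bar approaches a fixed point
```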
Figure 5: Deep networks in the chaotic regime are more expressive than shallow networks. (A)
Activity of four different neurons in the output layer as a function of the input, $\theta$, for three networks
of different depth (width $N_l = 1000$). (B) Linear regression of the output activity onto a random
function (black) shows closer predictions (blue) with deeper networks (bottom) than shallow networks
(top). (C) Decomposing the prediction error by frequency shows shallow networks cannot capture high
frequency content in random functions but deep networks can (yellow=high error). (D) Increasing
the width of a one hidden layer network up to 10,000 does not decrease error at high frequencies.
5
Shallow networks cannot achieve exponential expressivity
Consider a shallow network with 1 hidden layer $\mathbf{x}^1$, one input layer $\mathbf{x}^0$, with $\mathbf{x}^1 = \phi(\mathbf{W}^1 \mathbf{x}^0 + \mathbf{b}^1)$,
and a linear readout layer. How complex can the hidden representation be as a function of its width
$N_1$, relative to the results above for depth? We prove a general upper bound on $L^E$ (see SM):
Theorem 1. Suppose $\phi(h)$ is monotonically non-decreasing with bounded dynamic range $R$, i.e.
$\max_h \phi(h) - \min_h \phi(h) = R$. Further suppose that $\mathbf{x}^0(\theta)$ is a curve in input space such that no 1D
projection of $\partial_\theta \mathbf{x}(\theta)$ changes sign more than $s$ times over the range of $\theta$. Then for any choice of $\mathbf{W}^1$
and $\mathbf{b}^1$ the Euclidean length of $\mathbf{x}^1(\theta)$ satisfies $L^E \leq N_1 (1 + s) R$.
For the circle input, $s = 1$ and for the tanh nonlinearity, $R = 2$, so in this special case, the normalized
length $\bar{L}^E \leq 2\sqrt{N_1}$. In contrast, for deep networks in the chaotic regime $\bar{L}^E$ grows exponentially
with depth in $\mathbf{h}$ space, and so consequently also in $\mathbf{x}$ space. Therefore the length of curves typically
expand exponentially in depth even for random deep networks, but can only expand as the square
root of width no matter what shallow network is chosen. Moreover, as we have seen above, it is the
exponential growth of $\bar{L}^E$ that fundamentally drives the exponential growth of $\bar{L}^G$ with depth. Indeed
shallow random networks exhibit minimal growth in expressivity even at large widths (Figure 5D).
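The contrast behind Theorem 1 can be probed numerically by measuring the Euclidean length of the
circle's image in a deep chaotic network versus a single, much wider hidden layer. A rough sketch
reusing `propagate` and the circle `h1` from Section 4 (widths and weight scales are our choices):

```python
import numpy as np

def euclidean_length(h, dtheta):
    """Discrete estimate of L^E = \\int sqrt(g^E(theta)) dtheta."""
    v = np.gradient(h, dtheta, axis=0, edge_order=2)
    return np.sum(np.sqrt(np.sum(v * v, axis=1))) * dtheta

x1 = np.tanh(h1)                                   # first-layer activities on the circle

# Deep chaotic network: length of the circle's image after 9 more layers
deep = np.tanh(propagate(x1, depth=9, width=N1, sigma_w=4.0, sigma_b=0.3)[-1])
print('deep    L^E:', euclidean_length(deep, dtheta))

# One hidden layer, 10x wider; Theorem 1 bounds its length by (10*N1)*(1+s)*R
rng = np.random.RandomState(3)
W = rng.randn(10 * N1, N1) * 4.0 / np.sqrt(N1)
b = rng.randn(10 * N1) * 0.3
shallow = np.tanh(x1 @ W.T + b)
print('shallow L^E:', euclidean_length(shallow, dtheta))
```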
6
Classification boundaries acquire exponential local curvature with depth
We have focused so far on how simple manifolds in input space can acquire both exponential
Euclidean and Grassmannian length with depth, thereby exponentially de-correlating and filling up
hidden representation space. Another natural question is how the complexity of a decision boundary
grows as it is backpropagated to the input layer. Consider a linear classifier $y = \mathrm{sgn}(\boldsymbol{\beta} \cdot \mathbf{x}^D - \beta_0)$
acting on the final layer. In this layer, the $N-1$ dimensional decision boundary is the hyperplane
$\boldsymbol{\beta} \cdot \mathbf{x}^D - \beta_0 = 0$. However, in the input layer $\mathbf{x}^0$, the decision boundary is a curved $N-1$ dimensional
manifold $\mathcal{M}$ that arises as the solution set of the nonlinear equation $G(\mathbf{x}^0) \equiv \boldsymbol{\beta} \cdot \mathbf{x}^D(\mathbf{x}^0) - \beta_0 = 0$,
where $\mathbf{x}^D(\mathbf{x}^0)$ is the nonlinear feedforward map from input to output.
At any point $\mathbf{x}^*$ on the decision boundary in layer $l$, the gradient $\vec{\nabla}G$ is perpendicular to the $N-1$
dimensional tangent plane $T_{\mathbf{x}^*}\mathcal{M}$ (see Fig. 4F). The normal vector $\vec{\nabla}G$, along with any unit tangent
vector $\hat{\mathbf{v}} \in T_{\mathbf{x}^*}\mathcal{M}$, spans a 2 dimensional subspace whose intersection with $\mathcal{M}$ yields a geodesic
curve in $\mathcal{M}$ passing through $\mathbf{x}^*$ with velocity vector $\hat{\mathbf{v}}$. This geodesic will have extrinsic curvature
$\kappa(\mathbf{x}^*, \hat{\mathbf{v}})$. Maximizing this curvature over $\hat{\mathbf{v}}$ yields the first principal curvature $\kappa_1(\mathbf{x}^*)$. A sequence
of successive maximizations of $\kappa(\mathbf{x}^*, \hat{\mathbf{v}})$, while constraining $\hat{\mathbf{v}}$ to be perpendicular to all previous
solutions, yields the sequence of principal curvatures $\kappa_1(\mathbf{x}^*) \geq \kappa_2(\mathbf{x}^*) \geq \dots \geq \kappa_{N-1}(\mathbf{x}^*)$. These
principal curvatures arise as the eigenvalues of a normalized Hessian operator projected onto the
tangent plane $T_{\mathbf{x}^*}\mathcal{M}$:
$$\mathbf{H} = \frac{1}{\|\vec{\nabla}G\|_2}\, \mathbf{P}\, \frac{\partial^2 G}{\partial \mathbf{x}\, \partial \mathbf{x}^T}\, \mathbf{P}, \qquad \text{where } \mathbf{P} = \mathbf{I} - \widehat{\nabla G}\, \widehat{\nabla G}^T$$
is the projection operator onto $T_{\mathbf{x}^*}\mathcal{M}$ and $\widehat{\nabla G}$ is the unit normal vector [16]. Intuitively, near $\mathbf{x}^*$,
the decision boundary $\mathcal{M}$ can be approximated as a paraboloid with a quadratic form $\mathbf{H}$ whose $N-1$
eigenvalues are the principal curvatures $\kappa_1, \dots, \kappa_{N-1}$ (Fig. 4F).
We compute these curvatures numerically as a function of depth in Fig. 4G (see SM for details).
We find, remarkably, that a subset of principal curvatures grow exponentially with depth. Here
the principal curvatures are signed, with positive (negative) curvature indicating that the associated
geodesic curves towards (away from) the normal vector $\vec{\nabla}G$. Thus the decision boundary can
become exponentially curved with depth, enabling highly complex classifications. Moreover, this
exponentially curved boundary is disentangled and mapped to a flat boundary in the output layer.
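A brute-force version of the Fig. 4G computation: estimate the gradient and Hessian of $G$ by central
finite differences at a boundary point, project out the normal direction, and take eigenvalues of the
normalized projected Hessian. A sketch intended for small input dimension only (the step size and
sign convention are our choices; an autodiff Hessian would be the more robust route):

```python
import numpy as np

def principal_curvatures(G, x_star, eps=1e-4):
    """Signed principal curvatures of the level set G(x) = 0 at x_star,
    i.e. eigenvalues of H = P (d^2G/dx dx^T) P / ||grad G||_2."""
    n = x_star.size
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    E = np.eye(n) * eps
    for i in range(n):
        grad[i] = (G(x_star + E[i]) - G(x_star - E[i])) / (2 * eps)
        for j in range(i, n):
            hess[i, j] = hess[j, i] = (
                G(x_star + E[i] + E[j]) - G(x_star + E[i] - E[j])
                - G(x_star - E[i] + E[j]) + G(x_star - E[i] - E[j])) / (4 * eps**2)
    nrm = np.linalg.norm(grad)
    P = np.eye(n) - np.outer(grad, grad) / nrm**2   # projector onto the tangent plane
    kappas = np.linalg.eigvalsh(P @ hess @ P / nrm)
    return np.sort(kappas)[::-1]                    # one eigenvalue is ~0 (normal dir.)
```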
7
Discussion
Fundamentally, neural networks compute nonlinear maps between high dimensional spaces, for
example from $\mathbb{R}^{N_1} \to \mathbb{R}^{N_D}$, and it is unclear what the most appropriate mathematics is for
understanding such daunting spaces of maps. Previous works have attacked this problem by restricting
the nature of the nonlinearity involved (e.g. piecewise linear, sum-product, or Pfaffian) and thereby
restricting the space of maps to those amenable to special theoretical analysis methods (combinatorics,
polynomial relations, or topological invariants). We have begun a preliminary exploration of the
expressivity of such deep functions based on Riemannian geometry and dynamical mean field theory.
We demonstrate that networks in a chaotic phase compactly exhibit functions that exponentially grow
the global curvature of simple one dimensional manifolds from input to output and the local curvature
of simple co-dimension one manifolds from output to input. The former captures the notion that deep
neural networks can efficiently compute highly expressive functions in ways that shallow networks
cannot, while the latter quantifies and demonstrates the power of deep neural networks to disentangle
curved input manifolds, an attractive idea that has eluded formal quantification.
Moreover, our analysis of a maximum entropy distribution over deep networks constitutes an
important null model of deep signal propagation that can be used to assess and understand different
behavior in trained networks. For example, the metrics we have adapted from Riemannian geometry,
combined with an understanding of their behavior in random networks, may provide a basis for
understanding what is special about trained networks. Furthermore, while we have focused on the
notion of input-output chaos, the duality between inputs and synaptic weights implies a form of weight
chaos, in which deep neural networks rapidly traverse function space as weights change (see SM).
Indeed, just as autocorrelation lengths between outputs as a function of inputs shrink exponentially
with depth, so too will autocorrelations between outputs as a function of weights. Finally, while our
length and correlation maps can be applied directly to piecewise linear nonlinearities (e.g. ReLUs),
deep piecewise linear functions have 0 local curvature. To characterize how such functions twist
across input space, our methods can compute tangent vector auto-correlations instead of curvature.
But more generally, to understand functions, we often look to their graphs. The graph of a map from
$\mathbb{R}^{N_1} \to \mathbb{R}^{N_D}$ is an $N_1$ dimensional submanifold of $\mathbb{R}^{N_1 + N_D}$, and therefore has both high dimension
and co-dimension. We speculate that many of the secrets of deep learning may be uncovered by
studying the geometry of this graph as a Riemannian manifold, and understanding how it changes
with both depth and learning.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
convolutional neural networks. In Advances in Neural Information Processing Systems, pages
1097–1105, 2012.
[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G
Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al.
Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[3] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan
Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up
end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
[4] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444,
2015.
[5] Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J
Guibas, and Jascha Sohl-Dickstein. Deep knowledge tracing. In Advances in Neural Information
Processing Systems, pages 505–513, 2015.
[6] Lane T. McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen A.
Baccus. Deep learning models of the retinal response to natural scenes. In Advances in Neural
Information Processing Systems, 2016.
[7] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and
new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):
1798–1828, 2013.
[8] James J DiCarlo and David D Cox. Untangling invariant object recognition. Trends in Cognitive
Sciences, 11(8):333–341, 2007.
[9] Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of
linear regions of deep neural networks. In Advances in Neural Information Processing Systems,
pages 2924–2932, 2014.
[10] Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in
Neural Information Processing Systems, pages 666–674, 2011.
[11] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. arXiv
preprint arXiv:1512.03965, 2015.
[12] Matus Telgarsky. Representation benefits of deep feedforward networks. arXiv preprint
arXiv:1509.08101, 2015.
[13] James Martens, Arkadev Chattopadhya, Toni Pitassi, and Richard Zemel. On the representational
efficiency of restricted boltzmann machines. In Advances in Neural Information Processing
Systems, pages 2877–2885, 2013.
[14] Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A
comparison between shallow and deep architectures. Neural Networks and Learning Systems,
IEEE Transactions on, 25(8):1553–1565, 2014.
[15] Hrushikesh Mhaskar, Qianli Liao, and Tomaso Poggio. Learning real and boolean functions:
When is deep better than shallow. arXiv preprint arXiv:1603.00988, 2016.
[16] John M Lee. Riemannian manifolds: an introduction to curvature, volume 176. Springer
Science & Business Media, 2006.
[17] Haim Sompolinsky, A Crisanti, and HJ Sommers. Chaos in random neural networks. Physical
Review Letters, 61(3):259, 1988.
[18] Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the
expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.
Learnable Visual Markers
Oleg Grinchuk¹, Vadim Lebedev¹,², and Victor Lempitsky¹
¹Skolkovo Institute of Science and Technology, Moscow, Russia
²Yandex, Moscow, Russia
Abstract
We propose a new approach to designing visual markers (analogous to QR-codes,
markers for augmented reality, and robotic fiducial tags) based on the advances
in deep generative networks. In our approach, the markers are obtained as color
images synthesized by a deep network from input bit strings, whereas another
deep network is trained to recover the bit strings back from the photos of these
markers. The two networks are trained simultaneously in a joint backpropagation
process that takes characteristic photometric and geometric distortions associated
with marker fabrication and marker scanning into account. Additionally, a stylization loss based on statistics of activations in a pretrained classification network
can be inserted into the learning in order to shift the marker appearance towards
some texture prototype. In the experiments, we demonstrate that the markers obtained using our approach are capable of retaining bit strings that are long enough
to be practical. The ability to automatically adapt markers according to the usage
scenario and the desired capacity as well as the ability to combine information
encoding with artistic stylization are the unique properties of our approach. As
a byproduct, our approach provides an insight on the structure of patterns that
are most suitable for recognition by ConvNets and on their ability to distinguish
composite patterns.
1
Introduction
Visual markers (also known as visual fiducials or visual codes) are used to facilitate human-environment and robot-environment interaction, and to aid computer vision in resource-constrained
and/or accuracy-critical scenarios. Examples of such markers include simple 1D (linear) bar
codes [31] and their 2D (matrix) counterparts such as QR-codes [9] or Aztec codes [18], which
are used to embed chunks of information into objects and scenes. In robotics, AprilTags [23] and
similar methods [3, 4, 26] are a popular way to make locations, objects, and agents easily identifiable for robots. Within the realm of augmented reality (AR), ARCodes [6] and similar marker
systems [13, 21] are used to enable real-time camera pose estimation with high accuracy, low latency, and on low-end devices. Overall, such markers can embed information into the environment
in a more compact and language-independent way as compared to traditional human text signatures,
and they can also be recognized and used by autonomous and human-operated devices in a robust
way.
Existing visual markers are designed “manually” based on the considerations of the ease of processing by computer vision algorithms, the information capacity, and, less frequently, aesthetics.
Once a marker family is designed, a computer vision-based approach (a marker recognizer) has to be
engineered and tuned in order to achieve reliable marker localization and interpretation [1, 17, 25].
The two processes of the visual marker design on one hand and the marker recognizer design on the
other hand are thus separated into two subsequent steps, and we argue that such separation makes
the corresponding design choices inherently suboptimal. In particular, the third aspect (aesthetics)
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
is usually overlooked, which leads to visually-intrusive markers that in many circumstances might
not fit the style of a certain environment and make this environment “computer-friendly” at the cost
of “human-friendliness”.
In this work, we propose a new general approach to constructing visual markers that leverages
recent advances in deep generative learning. To this end, we suggest embedding the two tasks of the
visual marker design and the marker recognizer design into a single end-to-end learning framework.
Within our approach, the learning process produces markers and marker recognizers that are adapted
to each other “by design”. While our idea is more general, we investigate the case where the markers
are synthesized by a deep neural network (the synthesizer network), and when they are recognized
by another deep network (the recognizer network). In this case, we demonstrate how these two
networks can be both learned by a joint stochastic optimization process.
The benefits of the new approach are thus several-fold:
1. As we demonstrate, the learning process can take into account the adversarial effects that
complicate recognition of the markers, such as perspective distortion, confusion with background, low-resolution, motion blur, etc. All such effects can be modeled at training time
as piecewise-differentiable transforms. In this way they can be embedded into the learning
process that will adapt the synthesizer and the recognizer to be robust with respect to such
effect.
2. It is easy to control the trade-offs between the complexity of the recognizer network, the
information capacity of the codes, and the robustness of the recognition towards different
adversarial effects. In particular, one can set the recognizer to have a certain architecture,
fix the variability and the strength of the adversarial effects that need to be handled, and
then the synthesizer will adapt so that the most “legible” codes for such circumstances can
be computed.
3. Last but not least, the aesthetics of the neural codes can be brought into the optimization.
Towards this end we show that we can augment the learning objective with a special stylization loss inspired by [7, 8, 29]. Including such a loss facilitates the emergence of stylized
neural markers that look like instances of a designer-provided stochastic texture. While such
a modification of the learning process can reduce the information capacity of the markers, it
can greatly increase the “human-friendliness” of the resulting markers.
Below, we introduce our approach and then briefly discuss the relation of this approach to prior art.
We then demonstrate several examples of learned marker families.
2
Learnable visual markers
We now detail our approach (Figure 1). Our goal is to build a synthesizer network S(b; θ_S) with
learnable parameters θ_S that can encode a bit sequence b = {b_1, b_2, …, b_n} containing n bits into
an image M of size m-by-m (a marker). For notational simplicity in further derivations, we
assume that b_i ∈ {−1, +1}.
To recognize the markers produced by the synthesizer, a recognizer network R(I; θ_R) with learnable
parameters θ_R is created. The recognizer takes an image I containing a marker and infers the real-valued sequence r = {r_1, r_2, …, r_n}. The recognizer is paired to the synthesizer to ensure that
sign(r_i) = b_i, i.e. that the signs of the numbers inferred by the recognizer correspond to the bits
encoded by the synthesizer. In particular, we can measure the success of the recognition using a
simple loss function based on the element-wise sigmoid:
L(b, r) = -\frac{1}{n} \sum_{i=1}^{n} \sigma(b_i r_i) = -\frac{1}{n} \sum_{i=1}^{n} \frac{1}{1 + \exp(-b_i r_i)}    (1)
where the loss takes values between −1 (perfect recognition) and 0.
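For concreteness, the loss (1) amounts to a few lines of numpy; this is a minimal sketch, and the recognizer outputs below are hypothetical values:

```python
import numpy as np

def decoding_loss(b, r):
    """Loss (1): mean sigmoid agreement between bits b in {-1, +1} and
    real-valued recognizer outputs r; ranges from -1 (perfect) to 0."""
    return -np.mean(1.0 / (1.0 + np.exp(-b * r)))

b = np.array([1, -1, 1, 1, -1], dtype=float)
r = np.array([2.3, -1.7, 0.4, 3.0, -0.2])   # hypothetical recognizer outputs
print(decoding_loss(b, r))  # close to -1 when signs agree with large margins
```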
In real life, the recognizer network does not get to work with the direct outputs of the synthesizer.
Instead, the markers produced by the synthesizer network are somehow embedded into an environment (e.g. via printing or using electronic displays), and later their images are captured by some
camera controlled by a human or by a robot. During learning, we model the transformation between
[Figure 1 diagram labels: input bit string, synthesizer network, rendering network, recognizer network, decoded input, decoding loss, pretrained ConvNet, texture sample, Gram matrix, texture loss, backpropagation.]
Figure 1: The outline of our approach and the joint learning process. Our core architecture consists of the synthesizer network that converts input bit sequences into visual markers, the rendering
network that simulates photometric and geometric distortions associated with marker printing and
capturing, and the recognizer network that is designed to recover the input bit sequence from the
distorted markers. The whole architecture is trained end-to-end by backpropagation, after which
the synthesizer network can be used to generate markers, and the recognizer network to recover the
information from the markers placed in the environment. Additionally, we can enforce the visual
similarity of markers to a given texture sample using the mismatch in deep Gram matrix statistics in
a pretrained network [7] as the second loss term during learning (right part).
a marker produced by the synthesizer and the image of that marker using a special feed-forward network (the renderer network) T(M; φ), where the parameters φ of the renderer network are sampled
during learning and correspond to background variability, lighting variability, perspective slant, blur
kernel, color shift/white balance of the camera, etc. In some scenarios, the non-learnable parameters φ can be called nuisance parameters, although in others we might be interested in recovering
some of them (e.g. the perspective transform parameters). During learning, φ is sampled from some
distribution Φ which should model the variability of the above-mentioned effects in the conditions
under which the markers are meant to be used.
When our only objective is robust marker recognition, the learning process can be framed as the
minimization of the following functional:
f(\theta_S, \theta_R) = \mathbb{E}_{b \sim U(n)}\, \mathbb{E}_{\phi \sim \Phi}\; L\big(b,\; R\big(T(S(b; \theta_S); \phi);\, \theta_R\big)\big).    (2)

Here, the bit sequences b are sampled uniformly from U(n) = {−1, +1}^n, passed through the
synthesizer, the renderer, and the recognizer, with the (minus) loss (1) being used to measure the
success of the recognition. The parameters of the synthesizer and the recognizer are thus optimized
to maximize the success rate.
The minimization of (2) can then be accomplished using a stochastic gradient descent algorithm,
e.g. ADAM [14]. Each iteration of the algorithm samples a mini-batch of different bit sequences as
well as different rendering layer parameter sets and updates the parameters of the synthesizer and
the recognizer networks in order to minimize the loss (1) for these samples.
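To make the optimization concrete, here is a minimal PyTorch sketch of one such training loop. The synthesizer, recognizer, and renderer are toy stand-ins (single linear layers and a noisy gain), not the architectures used in the paper; only the sampling scheme and the loss follow the text:

```python
import torch
import torch.nn as nn

n, m = 16, 32                                  # toy bit length and marker side
S = nn.Sequential(nn.Linear(n, 3 * m * m), nn.Sigmoid())         # stand-in S
R = nn.Sequential(nn.Linear(3 * m * m, 256), nn.ReLU(),
                  nn.Linear(256, n))                             # stand-in R
opt = torch.optim.Adam(list(S.parameters()) + list(R.parameters()), lr=1e-3)

def T(M):
    """Stand-in renderer: a random per-image gain plus additive noise."""
    gain = 0.5 + torch.rand(M.shape[0], 1)     # one nuisance draw per marker
    return gain * M + 0.05 * torch.randn_like(M)

for step in range(1000):
    b = torch.randint(0, 2, (64, n)).float() * 2 - 1  # b ~ U(n) = {-1,+1}^n
    loss = -torch.sigmoid(b * R(T(S(b)))).mean()      # loss (1) on the batch
    opt.zero_grad()
    loss.backward()
    opt.step()
```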
Practical implementation. As mentioned above, the components of the architecture, namely the
synthesizer, the renderer, and the recognizer can be implemented as feed-forward networks. The
recognizer network can be implemented as a feedforward convolutional network [16] with n output
units. The synthesizer can use multiplicative and up-convolutional [5, 34] layers, as well as elementwise non-linearities.
Implementing the renderer T(M; φ) (Figure 2) requires non-standard layers. We have implemented
the renderer as a chain of layers, each introducing some “nuisance” transformation. We have implemented a special layer that superimposes an input over a bigger background patch drawn from
a random pool of images. We use the spatial transformer layer [11] to implement the geometric
distortion in a differentiable manner. Color shifts and intensity changes can be implemented using differentiable elementwise transformations (linear, multiplicative, gamma). Blurring associated
with lens effect or motion can be simply implemented using a convolutional layer. The nuisance
transformation layers can be chained resulting in a renderer layer that can model complex geometric
and photometric transformations (Figure 2).
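The following PyTorch sketch shows how such a chain can be assembled from differentiable operations. All parameter ranges here are illustrative assumptions, and the background pool is reduced to one random patch per marker:

```python
import torch
import torch.nn.functional as F

def render(M, bg, sigma=0.1):
    """Sketch of T(M; phi): superimpose on a background patch, apply a random
    affine warp (spatial transformer), a color/gamma shift, and a blur.
    Every step is (piecewise) differentiable in M."""
    B = M.shape[0]
    # 1) superimpose the marker over the centre of a larger background patch
    pad = (bg.shape[-1] - M.shape[-1]) // 2
    x = bg.clone()
    x[..., pad:pad + M.shape[-2], pad:pad + M.shape[-1]] = M
    # 2) random affine warp around the identity: phi_affine ~ I + N(0, sigma)
    theta = torch.eye(2, 3).repeat(B, 1, 1) + sigma * torch.randn(B, 2, 3)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    x = F.grid_sample(x, grid, align_corners=False)
    # 3) color shift: c1 * x**c2 + c3, with illustrative parameter ranges
    c1 = 0.8 + 0.4 * torch.rand(B, 1, 1, 1)
    c2 = 0.9 + 0.2 * torch.rand(B, 1, 1, 1)
    c3 = 0.1 * torch.randn(B, 1, 1, 1)
    x = c1 * x.clamp(min=1e-6) ** c2 + c3
    # 4) blur with a small uniform kernel (depthwise convolution)
    k = torch.full((3, 1, 3, 3), 1.0 / 9)
    return F.conv2d(x, k, padding=1, groups=3)

M = torch.rand(4, 3, 32, 32)    # four toy markers
bg = torch.rand(4, 3, 48, 48)   # random background patches
out = render(M, bg)
```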
[Figure 2 stages: Marker → Superimpose → Spatial Transform → Color Transform → Blur]
Figure 2: Visualizations of the rendering network T(M; φ). For the input marker M on the left, the
output of the network is obtained through several stages (which are all piecewise-differentiable w.r.t.
their inputs); on the right, the outputs T(M; φ) for several random nuisance parameters φ are shown. The
use of piecewise-differentiable transforms within T makes it possible to backpropagate through T.
Controlling the visual appearance. Interestingly, we observed that under variable conditions, the
optimization of (2) results in markers that have a consistent and interesting visual texture (Figure 3).
Despite such style consistency, it might be desirable to control the appearance of the resulting markers more explicitly e.g. using some artistic prototypes. Recently, [7] have achieved remarkable
results in texture generation by measuring the statistics of textures using Gram matrices of convolutional maps inside deep convolutional networks trained to classify natural images. Texture synthesis
can then be achieved by minimizing the deviation between such statistics of generated images and
of style prototypes. Based on their approach, [12, 29] have suggested including such a deviation as
a loss in the training process for deep feedforward generative neural networks. In particular, the
feed-forward networks in [29] are trained to convert noise vectors into textures.
We follow this line of work and augment our learning objective (2) with the texture loss of [7]. Thus,
we consider a feed-forward network C(M; γ) that computes the result of the t-th convolutional
layer of a network trained for large-scale natural image classification such as the VGGNet [28].
For an image M, the output C(M; γ) thus contains k 2D channels (maps). The network C uses
the parameters γ that are pre-trained on a large-scale dataset and that are not part of our learning
process. The style of an image M is then defined using the following k-by-k Gram matrix G(M; γ),
with each element defined as:

G_{ij}(M; \gamma) = \big\langle C_i(M; \gamma),\, C_j(M; \gamma) \big\rangle,    (3)
where Ci and Cj are the i-th and the j-th maps and the inner product is taken over all spatial locations.
Given a prototype texture M⁰, the learning objective can be augmented with the term:

f_{\mathrm{style}}(\theta_S) = \mathbb{E}_{b \sim U(n)}\, \big\| G(S(b; \theta_S); \gamma) - G(M^0; \gamma) \big\|^2.    (4)

The incorporation of the term (4) forces the markers S(b; θ_S) produced by the synthesizer to have
a visual appearance similar to instances of the texture defined by the prototype M⁰ [7].
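In PyTorch, the Gram statistics (3) and the style mismatch (4) can be computed as follows, given activations of a fixed pretrained network (e.g. a torchvision VGG layer); this is a minimal sketch:

```python
import torch

def gram(feats):
    """Gram matrix (3): inner products between the k channel maps of
    C(M; gamma) over all spatial locations; feats has shape (B, k, H, W)."""
    B, k, H, W = feats.shape
    f = feats.reshape(B, k, H * W)
    return f @ f.transpose(1, 2)                 # (B, k, k)

def style_loss(feat_marker, feat_proto):
    """Style term (4): squared mismatch of Gram statistics between marker
    activations and prototype activations from the fixed pretrained net."""
    return ((gram(feat_marker) - gram(feat_proto)) ** 2).sum(dim=(1, 2)).mean()
```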
3 Related Work
We now discuss the classes of deep learning methods that to the best of our understanding are most
related to our approach.
Our work is partially motivated by the recent approaches that analyze and visualize pretrained deep
networks by synthesizing color images evoking certain responses in these networks. Towards this
end [27] generate examples that maximize probabilities of certain classes according to the network,
[33] generate visual illusions that maximize such probabilities while retaining similarity to a predefined image of a potentially different class, [22] also investigate ways of generating highly-abstract
and structured color images that maximize probabilities of a certain class. Finally, [20] synthesize
color images that evoke a predefined vector of responses at a certain level of the network for the
purpose of network inversion. Our approach is related to these approaches, since our markers can be
regarded as stimuli invoking certain responses in the recognizer network. Unlike these approaches,
our recognizer network is not kept fixed but is updated together with the synthesizer network that
generates the marker images.
Another obvious connection is to autoencoders [2], which are models trained to (1) encode inputs into
a compact intermediate representation through the encoder network and (2) recover the original input
64 bits, default params, C=59.9, p=99.3%
96 bits, low affine, C=90.2, p=99.3%
64 bits, low affine σ = 0.05, C=61.2, p=99.5%
8 bits, high blur, C=7.91, p=99.9%
32 bits, grayscale, C=27.9, p=98.3%
64 bits, nonlinear encoder, C=58.4, p=98.9%
64 bits, thin network, C=40.1, p=93.2%
64 bits, 16 pixel marker, C=56.8, p=98.5%
Figure 3: Visualization of the markers learned by our approach under different circumstances shown
in captions (see text for details). The captions also show the bit length, the capacity of the resulting encoding (in bits), as well as the accuracy achieved during training. In each case we show six
markers: (1) – the marker corresponding to a bit sequence consisting of −1, (2) – the marker corresponding to a bit sequence consisting of +1, (3) and (4) – markers for two random bit sequences that
differ by a single bit, (5) and (6) – two markers corresponding to two more random bit sequences.
Under many conditions a characteristic grid pattern emerges.
by passing the compact representation through the decoder network. Our system can be regarded
as a special kind of autoencoder with a specific format of the intermediate representation (a color
image). Our decoder is trained to be robust to certain class of transformations of the intermediate
representations that are modeled by the rendering network. In this respect, our approach is related
to variational autoencoders [15] that are trained with stochastic intermediate representations and to
denoising autoencoders [30] that are trained to be robust to noise.
Finally, our approach for creating textured markers can be related to steganography [24], which aims
at hiding a signal in a carrier image. Unlike steganography, we do not aim to conceal information,
but just to minimize its “intrusiveness”, while keeping the information machine-readable in the
presence of distortions associated with printing and scanning.
4 Experiments
Below, we present a qualitative and quantitative evaluation of our approach. For longer bit sequences,
the approach might not be able to train a perfect pair of a synthesizer and a recognizer, and therefore,
similarly to other visual marker systems, it makes sense to use error-correcting encoding of the
signal. Since the recognizer network returns the odds for each bit in the recovered signal, our
approach is suitable for any probabilistic error-correction coding [19].
Synthesizer architectures. For the experiments without texture loss, we use the simplest synthesizer network, which consists of a single linear layer (with a 3m² × n matrix and a bias vector)
that is followed by an element-wise sigmoid. For the experiments with texture loss, we started with
the synthesizer used in [29], but found out that it can be greatly simplified for our task. Our final
architecture takes a binary code as input and transforms it with a single fully connected layer and a
series of 3×3 convolutions with 2× upsamplings in between.
Recognizer architectures. Unless reported otherwise, the recognizer network was implemented as
a ConvNet with three convolutional layers (96 5×5 filters followed by max-pooling and ReLU),
and two fully-connected layers with 192 and n output units respectively (where n is the length of
the code). We find this architecture sufficient to successfully deal with marker encoding. In some
experiments we have also considered a much smaller network with 24 maps in the convolutional layers,
and 48 units in the penultimate layer (the “thin network”). In general, convergence during training
benefits greatly from adding Batch Normalization [10] after every convolutional layer.
[Figure 4 row/column labels: prototype | all −1 | all +1 | half | random | random + 1 bit diff.]
Figure 4: Examples of textured 64-bit marker families. The texture prototype is shown in the first
column, while the five remaining columns show markers for the following sequences: all −1, all +1,
32 consecutive −1 followed by 32 +1, and, finally, two random bit sequences that differ by a single
bit.
During our experiments with texture loss, we used a VGGNet-like architecture with 3 blocks, each
consisting of two 3×3 convolutions and max-pooling, followed by two dense layers.
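A PyTorch sketch of the two architectures is given below. The 96 5×5 filters and the 192-unit penultimate layer follow the text; channel counts and other unspecified sizes are guesses:

```python
import torch
import torch.nn as nn

class Synthesizer(nn.Module):
    """Bit string -> 3 x m x m marker: a fully connected layer feeding
    3x3 convolutions with 2x upsamplings, as described in the text."""
    def __init__(self, n_bits, m=32, ch=64):
        super().__init__()
        self.ch, self.m = ch, m
        self.fc = nn.Linear(n_bits, ch * (m // 4) ** 2)
        self.body = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, b):
        h = self.fc(b).view(-1, self.ch, self.m // 4, self.m // 4)
        return self.body(h)

class Recognizer(nn.Module):
    """Image -> n real-valued outputs: three 5x5 conv blocks with pooling,
    then a 192-unit hidden layer, matching the description in the text."""
    def __init__(self, n_bits, m=32, maps=96):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, maps, 5, padding=2), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(maps, maps, 5, padding=2), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(maps, maps, 5, padding=2), nn.MaxPool2d(2), nn.ReLU())
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(maps * (m // 8) ** 2, 192),
                                  nn.ReLU(), nn.Linear(192, n_bits))
    def forward(self, x):
        return self.head(self.features(x))
```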
Rendering settings. We perform a spatial transform as an affine transformation, where the 6 affine
parameters are sampled from [1, 0, 0, 0, 1, 0] + N(0, σ) (assuming the origin at the center of the marker).
The example for σ = 0.1 is shown in Fig. 2. We leave more complex spatial transforms (e.g. thin
plate spline [11]) that can make markers more robust to bending for future work. Some resilience to
bending can still be observed in our qualitative results.
Given an image x, we implement the color transformation layer as c₁x^{c₂} + c₃, where the parameters
are sampled from a symmetric uniform distribution U[−δ, δ]. As we find that printed markers tend to reduce
the color contrast, we add a contrast reduction layer that transforms each value to kx + (1 − k)·0.5
for a random k.
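A minimal sampler for one nuisance parameter set φ, following the distributions above; the range δ of the color coefficients, and their centering at the identity transform, are assumptions, since the text does not state them explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nuisance(sigma=0.1, delta=0.2):
    """Draw one rendering parameter set phi as described above."""
    affine = np.array([1., 0., 0., 0., 1., 0.]) + rng.normal(0, sigma, 6)
    c1 = 1 + rng.uniform(-delta, delta)   # multiplicative gain, near 1 (assumed)
    c2 = 1 + rng.uniform(-delta, delta)   # exponent (gamma), near 1 (assumed)
    c3 = rng.uniform(-delta, delta)       # additive shift, near 0 (assumed)
    k = rng.uniform(0, 1)                 # contrast: k*x + (1 - k)*0.5
    return affine, (c1, c2, c3), k
```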
Quantitative measurements. To quantify the performance of our markers under different circumstances, we report the accuracy p to which our system converges during the learning under different
settings (to evaluate accuracy, we threshold recognizer predictions at zero). Whenever we vary the
signal length n, we also report the capacity of the code, which is defined as C = n(1 − H(p)), where
H(p) = −p log p − (1 − p) log(1 − p) is the binary coding entropy. Unless specified otherwise, we use
the rendering network settings visualized in Figure 2, which gives the impression of the variability
and the difficulty of the recovery problem, as the recognizer network is applied to the outputs of this
rendering network.
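As a small worked example, the capacity of the 64-bit default model from Figure 3 can be checked directly:

```python
import numpy as np

def capacity(n, p):
    """C = n * (1 - H(p)), with H(p) the binary entropy in bits."""
    H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return n * (1 - H)

print(capacity(64, 0.993))   # ~60.1, roughly matching C = 59.9 in Figure 3
```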
Experiments without texture loss. The bulk of experiments without the texture loss has been
performed with m = 32, i.e. 32×32 patches (we used bilinear interpolation when printing or visualizing). The learned marker families with the base architecture as well as with its variations are
shown in Figure 3. It is curious to see the emergence of lattice structures (even though our synthesizer network in this case was a simple single-layer multiplicative network). Apparently, such
[Figure 5 per-image results, in reading order: 64/64, 126/128, 32/32, 64/64, 64/64, 63/64, 124/128, 32/32, 64/64, 64/64, 62/64, 122/128, 31/32, 56/64, 59/64, 59/64, 115/128, 31/32, 60/64, 56/64.]
Figure 5: Screenshots of marker recognition process (black box is a part of the user interface and
corresponds to perfect alignment). The captions are in (number of correctly recovered bits/total
sequence length) format. The rightmost two columns correspond to stylized markers. These marker
families were trained with spatial variances σ = 0.1, 0.05, 0.1, 0.05, 0.05 respectively. Larger σ
leads to greater robustness of code recovery with respect to affine transformations.
lattices are most efficient in terms of storing information for later recovery with a ConvNet. It can
also be seen how the system can adapt the markers to varying bit lengths or to varying robustness demands (e.g. to increasing blur or geometric distortions). We have further plotted how the quantitative
performance depends on the bit length and on the marker size in Figure 6.
Experiments with texture loss. An interesting effect we encountered while training the synthesizer with the texture loss and a small output marker size is that it often ended up producing very similar
patterns. We tried tweaking the architecture to handle this problem but eventually found that the issue goes
away for larger markers.
Performance of real markers. We also show some qualitative results that include printing (on a
laser printer using various backgrounds) and capturing (with a webcam) of the markers. Characteristic results in Figure 5 demonstrate that our system can successfully recover encoded signals with
a small number of mistakes. The number of mistakes can be further reduced by applying the system
with jitter and averaging the odds (not implemented here).
Here, we aid the system by roughly aligning the marker with a pre-defined square (shown as part
of the user interface). As can be seen, the degradation of the results with increasing alignment
error is graceful (due to the use of affine transforms inside the rendering network at train time). In
a more advanced system, such alignment can be bypassed altogether, using a pipeline that detects
marker instances in a video stream and localizes their corners. Here, one can either use existing
quad detection algorithms as in [23] or make the localization process a deep feed-forward network
and include it into the joint learning in our system. In the latter case, the synthesizer would adapt to
produce markers that are distinguishable from backgrounds and have easily identifiable corners. In
[Figure 6 plots: left, accuracy (%) vs. number of bits (0–200), with curves for 'less affine', 'default', and 'thin network'; right, accuracy (%) vs. marker size in pixels (0–60).]
Figure 6: Left – dependence of the recognition accuracy on the size of the bit string for two variants
with the default networks, and one with a reduced number of maps in each convolutional layer.
Reducing the capacity of the network hurts the performance a lot, while reducing spatial variation in
the rendering network (to σ = 0.05) increases the capacity very considerably. Right – dependence
of the recognition accuracy on the marker size (with otherwise default settings). The capacity of the
coding quickly saturates as markers grow bigger.
such qualitative experiments (Figure 5), we observe error rates that are roughly comparable with
our quantitative experiments.
Recognizer networks for QR-codes. We have also experimented with replacing the synthesizer
network with a standard QR-encoder. While we tried different settings (such as error-correction
level, input bit sequence representation), the highest recognition rate we could achieve with our
architecture of the recognizer network was only 85%. Apparently, the recognizer network cannot
reverse the combination of error-correction encoding and rendering transformations well. We also
tried to replace both the synthesizer and the recognizer with a QR-encoder and a QR-decoder. Here
we found that standard QR-decoders cannot decode QR-markers processed by our renderer network
at the typical level of blur in our experiments (though special-purpose blind deblurring algorithms
such as [32] are likely to succeed).
5 Discussion
In this work, we have proposed a new approach to marker design, in which the markers and their recognizer are learned jointly. An aesthetics-related term can be added to the objective.
To the best of our knowledge, we are the first to approach visual marker design using optimization.
One curious side aspect of our work is the fact that the learned markers can provide an insight into
the architecture of ConvNets (or whatever architecture is used in the recognizer network). In more
detail, they represent patterns that are most suitable for recognition with ConvNets. Unlike other
approaches that e.g. visualize patterns for networks trained to classify natural images, our method
decouples geometric and topological factors on one hand from natural image statistics on the
other, as we obtain these markers in a “content-free” manner¹.
As discussed above, one further extension of the system might be to include a marker localizer in
the learning as another deep feedforward network. We note that in some scenarios (e.g. generating
augmented reality tags for real-time camera localization), one can train the recognizer to estimate
the parameters of the geometric transformation in addition to, or even instead of, recovering the
input bit string. This would make it possible to create visual markers particularly suitable for accurate pose
estimation.
¹The only exception is the background images used by the rendering layer. In our experience, their statistics have negligible influence on the emerging patterns.
References
[1] L. F. Belussi and N. S. Hirata. Fast component-based qr code detection in arbitrarily acquired images.
Journal of mathematical imaging and vision, 45(3):277–292, 2013.
[2] Y. Bengio. Learning deep architectures for AI. Foundations and trends in Machine Learning, 2(1):1–127,
2009.
[3] F. Bergamasco, A. Albarelli, and A. Torsello. Pi-tag: a fast image-space marker design based on projective
invariants. Machine vision and applications, 24(6):1295–1310, 2013.
[4] D. Claus and A. W. Fitzgibbon. Reliable fiducial detection in natural scenes. Computer Vision-ECCV
2004, pp. 469–480. Springer, 2004.
[5] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
[6] M. Fiala. ARTag, a fiducial marker system using digital techniques. Conf. Computer Vision and Pattern
Recognition (CVPR), v. 2, pp. 590–596, 2005.
[7] L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. Advances
in Neural Information Processing Systems, NIPS, pp. 262–270, 2015.
[8] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, CVPR, 2016.
[9] M. Hara, M. Watabe, T. Nojiri, T. Nagaya, and Y. Uchiyama. Optically readable two-dimensional code
and method and apparatus using the same, 1998. US Patent 5,726,435.
[10] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. Proc. International Conference on Machine Learning, ICML, pp. 448–456, 2015.
[11] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. Advances in Neural
Information Processing Systems, pp. 2008–2016, 2015.
[12] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution.
European Conference on Computer Vision (ECCV), pp. 694–711, 2016.
[13] M. Kaltenbrunner and R. Bencina. Reactivision: a computer-vision framework for table-based tangible
interaction. Proc. of the 1st international conf. on tangible and embedded interaction, pp. 69–74, 2007.
[14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on
Learning Representations, 2015.
[15] D. P. Kingma and M. Welling. Auto-encoding variational bayes. International Conference on Learning
Representations, 2014.
[16] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
[17] C.-C. Lo and C. A. Chang. Neural networks for bar code positioning in automated material handling.
Industrial Automation and Control: Emerging Technologies, pp. 485–491. IEEE, 1995.
[18] A. Longacre Jr and R. Hussey. Two dimensional data encoding structure and symbology for use with
optical readers, 1997. US Patent 5,591,956.
[19] D. J. MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003.
[20] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. Conf.
Computer Vision and Pattern Recognition (CVPR), 2015.
[21] J. Mooser, S. You, and U. Neumann. Tricodes: A barcode-like fiducial design for augmented reality
media. IEEE Multimedia and Expo, pp. 1301–1304, 2006.
[22] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions
for unrecognizable images. Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
[23] E. Olson. Apriltag: A robust and flexible visual fiducial system. Robotics and Automation (ICRA), 2011
IEEE International Conference on, pp. 3400–3407. IEEE, 2011.
[24] F. A. Petitcolas, R. J. Anderson, and M. G. Kuhn. Information hiding-a survey. Proceedings of the IEEE,
87(7):1062–1078, 1999.
[25] A. Richardson and E. Olson. Learning convolutional filters for interest point detection. Conf. on Robotics
and Automation (ICRA), pp. 631–637, 2013.
[26] D. Scharstein and A. J. Briggs. Real-time recognition of self-similar landmarks. Image and Vision
Computing, 19(11):763–772, 2001.
[27] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image
classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[29] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of
textures and stylized images. Int. Conf. on Machine Learning (ICML), 2016.
[30] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with
denoising autoencoders. Int. Conf. on Machine learning (ICML), 2008.
[31] N. J. Woodland and S. Bernard. Classifying apparatus and method, 1952. US Patent 2,612,994.
[32] S. Yahyanejad and J. Ström. Removing motion blur from barcode images. 2010 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition Workshops, pp. 41–46. IEEE, 2010.
[33] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. Computer Vision –
ECCV 2014, pp. 818–833. Springer, 2014.
[34] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level
feature learning. Int. Conf. on Computer Vision (ICCV), pp. 2018–2025, 2011.
Local Maxima in the Likelihood of Gaussian Mixture
Models: Structural Results and Algorithmic
Consequences
Chi Jin
UC Berkeley
[email protected]
Yuchen Zhang
UC Berkeley
[email protected]
Martin J. Wainwright
UC Berkeley
[email protected]
Sivaraman Balakrishnan
Carnegie Mellon University
[email protected]
Michael I. Jordan
UC Berkeley
[email protected]
Abstract
We provide two fundamental results on the population (infinite-sample) likelihood
function of Gaussian mixture models with M ≥ 3 components. Our first main
result shows that the population likelihood function has bad local maxima even
in the special case of equally-weighted mixtures of well-separated and spherical
Gaussians. We prove that the log-likelihood value of these bad local maxima can
be arbitrarily worse than that of any global optimum, thereby resolving an open
question of Srebro [2007]. Our second main result shows that the EM algorithm
(or a first-order variant of it) with random initialization will converge to bad critical
points with probability at least 1 − e^{−Ω(M)}. We further establish that a first-order
variant of EM will not converge to strict saddle points almost surely, indicating that
the poor performance of the first-order method can be attributed to the existence of
bad local maxima rather than bad saddle points. Overall, our results highlight the
necessity of careful initialization when using the EM algorithm in practice, even
when applied in highly favorable settings.
1 Introduction
Finite mixture models are widely used in a variety of statistical settings, as models for heterogeneous
populations, as flexible models for multivariate density estimation and as models for clustering. Their
ability to model data as arising from underlying subpopulations provides essential flexibility in a
wide range of applications [Titterington, 1985]. This combinatorial structure also creates challenges
for statistical and computational theory, and there are many problems associated with estimation of
finite mixtures that are still open. These problems are often studied in the setting of Gaussian mixture
models (GMMs), reflecting the wide use of GMMs in applications, particularly in the multivariate
setting, and this setting will also be our focus in the current paper.
Early work [Teicher, 1963] studied the identifiability of finite mixture models, and this problem has
continued to attract significant interest (see the recent paper of Allman et al. [2009] for a recent
overview). More recent theoretical work has focused on issues related to the use of GMMs for the
density estimation problem [Genovese and Wasserman, 2000, Ghosal and Van Der Vaart, 2001].
Focusing on rates of convergence for parameter estimation in GMMs, Chen [1995] established the
surprising result that when the number of mixture components is unknown, the standard √n-rate
for regular parametric models is not achievable. Recent investigations [Ho and Nguyen, 2015] into
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
exact-fitted, under-fitted and over-fitted GMMs have characterized the achievable rates of convergence
in these settings.
From an algorithmic perspective, the dominant practical method for estimating GMMs is the
Expectation-Maximization (EM) algorithm [Dempster et al., 1977]. The EM algorithm is an ascent
method for maximizing the likelihood, but is only guaranteed to converge to a stationary point of
the likelihood function. As such, there are no general guarantees for the quality of the estimate
produced via the EM algorithm for Gaussian mixture models.¹ This has led researchers to explore
various alternative algorithms which are computationally efficient, and for which rigorous statistical
guarantees can be given. Broadly, these algorithms are based either on clustering [Arora et al., 2005,
Dasgupta and Schulman, 2007, Vempala and Wang, 2002, Chaudhuri and Rao, 2008] or on the
method of moments [Belkin and Sinha, 2010, Moitra and Valiant, 2010, Hsu and Kakade, 2013].
Although general guarantees have not yet emerged, there has nonetheless been substantial progress
on the theoretical analysis of EM and its variations. Dasgupta and Schulman [2007] analyzed a
two-round variant of EM, which involved over-fitting the mixture and then pruning extra centers.
They showed that this algorithm can be used to estimate Gaussian mixture components whose means
are separated by at least Ω(d^{1/4}). Balakrishnan et al. [2015] studied the local convergence of the
EM algorithm for a mixture of two Gaussians with Ω(1)-separation. Their results show that global
optima have relatively large regions of attraction, but still require that the EM algorithm be provided
with a reasonable initialization in order to ensure convergence to a near globally optimal solution.
To date, computationally efficient algorithms for estimating a GMM provide guarantees under the
strong assumption that the samples come from a mixture of Gaussians, i.e., that the model is well-specified. In practice, however, we never expect the data to exactly follow the generative model, and it
is important to understand the robustness of our algorithms to this assumption. In fact, maximum
likelihood has favorable properties in this regard: maximum-likelihood estimates are well known to
be robust to perturbations in the Kullback-Leibler metric of the generative model [Donoho and Liu,
1988]. This mathematical result motivates further study of EM and other likelihood-based methods
from the computational point of view. It would be useful to characterize when efficient algorithms
can be used to compute a maximum likelihood estimate, or a solution that is nearly as accurate, and
which retains the robustness properties of the maximum likelihood estimate.
In this paper, we focus our attention on uniformly weighted mixtures of M isotropic Gaussians. For
this favorable setting, Srebro [2007] conjectured that any local maximum of the likelihood function
is a global maximum in the limit of infinite samples; in other words, that there are no bad local
maxima for the population GMM likelihood function. This conjecture, if true, would provide strong
theoretical justification for EM, at least for large sample sizes. For suitably small sample sizes, it is
known [Améndola et al., 2015] that configurations of the samples can be constructed which lead to the
likelihood function having an unbounded number of local maxima. The conjecture of Srebro [2007]
avoids this by requiring that the samples come from the specified GMM, as well as by considering the
(infinite-sample-size) population setting. In the context of high-dimensional regression, it has been
observed that in some cases despite having a non-convex objective function, every local optimum of
the objective is within a small, vanishing distance of a global optimum [see, e.g., Loh and Wainwright,
2013, Wang et al., 2014]. In these settings, it is indeed the case that for sufficiently large sample sizes
there are no bad local optima.
A mixture of two spherical Gaussians: A Gaussian mixture model with a single component is
simply a Gaussian, so the conjecture of Srebro [2007] holds trivially in this case. The first interesting
case is a Gaussian mixture with two components, for which empirical evidence supports the conjecture
that there are no bad local optima. It is possible to visualize the setting when there are only two
components and to develop a more detailed understanding of the population likelihood surface.
Consider for instance a one-dimensional equally weighted unit variance GMM with true centers
μ*_1 = −4 and μ*_2 = 4, and consider the log-likelihood as a function of the vector μ := (μ_1, μ_2).
Figure 1 shows both the population log-likelihood, μ ↦ L(μ), and the negative 2-norm of its
gradient, μ ↦ −‖∇L(μ)‖₂. Observe that the only local maxima are the vectors (−4, 4) and (4, −4),
¹In addition to issues of convergence to non-maximal stationary points, solutions of infinite likelihood exist
for GMMs where both the location and scale parameters are estimated. In practice, several methods exist to
avoid such solutions. In this paper, we avoid this issue by focusing on GMMs in which the scale parameters are
fixed.
which are both also global maxima. The only remaining critical point is (0, 0), which is a saddle
point. Although points of the form (0, R), (R, 0) have small gradient when |R| is large, the gradient
is not exactly zero for any finite R. Rigorously resolving the question of existence or non-existence
of local maxima for the setting when M = 2 remains an open problem.
In the remainder of our paper, we focus our attention on the setting where there are more than two
mixture components and attempt to develop a broader understanding of likelihood surfaces for these
models, as well as the consequences for algorithms.
[Figure 1 surface plots, panels (a) and (b); axis tick values omitted.]
Figure 1: Illustration of the likelihood and gradient maps for a two-component Gaussian mixture.
(a) Plot of the population log-likelihood map μ ↦ L(μ). (b) Plot of the negative Euclidean norm of the
gradient map μ ↦ −‖∇L(μ)‖₂.
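The surface in panel (a) can be reproduced qualitatively with a short Monte Carlo sketch, in which the population expectation is approximated by a large sample; grid ranges and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_star = np.array([-4.0, 4.0])                  # true centers of the example
X = np.concatenate([rng.normal(c, 1.0, 5000) for c in mu_star])  # ~ GMM(mu*)

def L(mu1, mu2):
    """Monte Carlo estimate of the population log-likelihood (a 1-D,
    two-component instance of equation (4b) below)."""
    lp = np.stack([-(X - mu1) ** 2 / 2, -(X - mu2) ** 2 / 2])
    mix = 0.5 * np.exp(lp).sum(axis=0) / np.sqrt(2 * np.pi)
    return np.log(mix).mean()

grid = np.linspace(-20, 20, 41)
surface = np.array([[L(a, b) for b in grid] for a in grid])
i, j = np.unravel_index(surface.argmax(), surface.shape)
print(grid[i], grid[j])   # the maximizer sits near (-4, 4) or (4, -4)
```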
Our first contribution is a negative answer to the open question of Srebro [2007]. We construct a
GMM which is a uniform mixture of three spherical, unit-variance, well-separated Gaussians whose
population log-likelihood function contains local maxima. We further show that the log-likelihood of
these local maxima can be arbitrarily worse than that of the global maxima. This result immediately
implies that any local search algorithm cannot exhibit global convergence (meaning convergence to a
global optimum from all possible starting points), even on well-separated mixtures of Gaussians.
The mere existence of bad local maxima is not a practical concern unless it turns out that natural
algorithms are frequently trapped in these bad local maxima. Our second main result shows that
the EM algorithm, as well as a variant thereof known as the first-order EM algorithm, with random
initialization, converges to a bad critical point with an exponentially high probability. In more detail,
we consider the following practical scheme for parameter estimation in an M-component Gaussian
mixture:
(a) Draw M i.i.d. points μ_1, …, μ_M uniformly at random from the sample set.
(b) Run the EM or first-order EM algorithm to estimate the model parameters, using μ_1, …, μ_M
as the initial centers.
We note that in the limit of infinite samples, the initialization scheme we consider is equivalent
to selecting M initial centers i.i.d. from the underlying mixture distribution. We show that for a
universal constant c > 0, with probability at least 1 − e^{−cM}, the EM and first-order EM algorithms
converge to a suboptimal critical point, whose log-likelihood could be arbitrarily worse than that of
the global maximum. Conversely, in order to find a solution with satisfactory log-likelihood via this
initialization scheme, one needs to repeat the above scheme exponentially many (in M) times, and then
select the solution with highest log-likelihood. This result strongly indicates that repeated random
initialization followed by local search (via either EM or its first order variant) can fail to produce
useful estimates under reasonable constraints on computational complexity.
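A minimal numpy sketch of scheme (a)–(b) with repeated restarts, on a toy well-separated mixture, is given below; the component layout, sample sizes, iteration counts, and restart budget are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_em(X, init, iters=100):
    """Minimal sample-based EM for the uniform, unit-variance model;
    returns the final centers and the sample log-likelihood L_n."""
    mu = init.astype(float).copy()
    for _ in range(iters):
        lp = -0.5 * ((X[:, None, :] - mu[None]) ** 2).sum(-1)     # (n, M)
        lp -= lp.max(axis=1, keepdims=True)                       # stability
        w = np.exp(lp)
        w /= w.sum(axis=1, keepdims=True)                         # E-step
        mu = (w[:, :, None] * X[:, None, :]).sum(0) / w.sum(0)[:, None]
    lp = -0.5 * ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    m = lp.max(axis=1, keepdims=True)
    ll = (m[:, 0] + np.log(np.exp(lp - m).sum(axis=1))).mean() \
         - np.log(mu.shape[0]) - X.shape[1] / 2 * np.log(2 * np.pi)
    return mu, ll

# toy well-separated mixture: M = 3 components in d = 2
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X = np.concatenate([rng.normal(c, 1.0, (500, 2)) for c in centers])

# scheme (a)-(b), repeated: initialize at M random data points, run EM,
# and keep the restart with the highest sample log-likelihood
restarts = [run_em(X, X[rng.integers(len(X), size=3)]) for _ in range(20)]
mu_best, ll_best = max(restarts, key=lambda t: t[1])
print(ll_best, mu_best)   # usually close to a permutation of `centers`
```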
We further prove that under the same random initialization scheme, the first-order EM algorithm with
a suitable stepsize does not converge to a strict saddle point with probability one. This fact strongly
suggests that the failure of local search methods for the GMM model is due mainly to the existence
of bad local optima, and not due to the presence of (strict) saddle points.
Our proofs introduce new techniques to reason about the structure of the population log-likelihood,
and in particular to show the existence of bad local optima. We expect that these general ideas will
aid in developing a better understanding of the behavior of algorithms for non-convex optimization.
From a practical standpoint, our results strongly suggest that careful initialization is required for local
search methods, even in large-sample settings, and even for extremely well-behaved mixture models.
The remainder of this paper is organized as follows. In Section 2, we introduce GMMs, the EM
algorithm, its first-order variant and we formally set up the problem we consider. In Section 3, we
state our main theoretical results and develop some of their implications. Section A is devoted to the
proofs of our results, with some of the more technical aspects deferred to the appendices.
2 Background and Preliminaries
In this section, we formally define the Gaussian mixture model that we study in the paper. We then
describe the EM algorithm, the first-order EM algorithm, as well as the form of random initialization
that we analyze. Throughout the paper, we use [M] to denote the set {1, 2, …, M}, and N(μ, Σ) to
denote the d-dimensional Gaussian distribution with mean vector μ and covariance matrix Σ. We use
φ(· | μ, Σ) to denote the probability density function of the Gaussian distribution with mean vector μ
and covariance matrix Σ:
\phi(x \mid \mu, \Sigma) := \frac{1}{\sqrt{(2\pi)^d \det(\Sigma)}}\, e^{-\frac{1}{2}(x - \mu)^\top \Sigma^{-1} (x - \mu)}.    (1)
2.1 Gaussian Mixture Models
A d-dimensional Gaussian mixture model (GMM) with M components can be specified by a collection μ* = {μ*_1, …, μ*_M} of d-dimensional mean vectors, a vector π* = (π*_1, …, π*_M) of non-negative mixture weights that sum to one, and a collection Σ* = {Σ*_1, …, Σ*_M} of covariance
matrices. Given these parameters, the density function of a Gaussian mixture model takes the form

p(x \mid \mu^*, \pi^*, \Sigma^*) = \sum_{i=1}^{M} \pi^*_i\, \phi(x \mid \mu^*_i, \Sigma^*_i),
where the Gaussian density function φ was previously defined in equation (1). In this paper, we focus
on the idealized situation in which every mixture component is equally weighted, and the covariance
of each mixture component is the identity. This leads to a mixture model of the form

p(x \mid \mu^*) := \frac{1}{M} \sum_{i=1}^{M} \phi(x \mid \mu^*_i, I),    (2)

which we denote by GMM(μ*). In this case, the only parameters to be estimated are the mean
vectors μ* = {μ*_i}_{i=1}^{M} of the M components.
The difficulty of estimating a Gaussian mixture distribution depends on the amount of separation
between the mean vectors. More precisely, for a given parameter γ > 0, we say that the GMM(μ*) model is γ-separated if

\|\mu^*_i - \mu^*_j\|_2 \geq \gamma, \quad \text{for all distinct pairs } i, j \in [M].    (3)

We say that the mixture is well-separated if condition (3) holds for some γ = Ω(√d).
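As a small illustration, the density (2) and the separation condition (3) can be checked directly in numpy; the example centers are arbitrary:

```python
import numpy as np

def phi(x, mu):
    """Isotropic Gaussian density phi(x | mu, I) from equation (1)."""
    d = mu.shape[-1]
    return np.exp(-0.5 * np.sum((x - mu) ** 2, -1)) / (2 * np.pi) ** (d / 2)

def gmm_density(x, mu_star):
    """Equally weighted mixture density (2)."""
    return np.mean([phi(x, m) for m in mu_star], axis=0)

def separation(mu_star):
    """Smallest pairwise distance: the model is gamma-separated iff this
    value is at least gamma (condition (3))."""
    D = np.linalg.norm(mu_star[:, None] - mu_star[None], axis=-1)
    return D[~np.eye(len(mu_star), dtype=bool)].min()

mu_star = np.array([[0., 0.], [10., 0.], [0., 10.]])
print(separation(mu_star))    # 10.0, so this model is 10-separated
```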
Suppose that we observe an i.i.d. sequence {x_ℓ}_{ℓ=1}^{n} drawn according to the distribution GMM(μ*),
and our goal is to estimate the unknown collection of mean vectors μ*. The sample-based log-likelihood function L_n is given by

L_n(\mu) := \frac{1}{n} \sum_{\ell=1}^{n} \log\Big( \frac{1}{M} \sum_{i=1}^{M} \phi(x_\ell \mid \mu_i, I) \Big).    (4a)
As the sample size n tends to infinity, this sample likelihood converges to the population log-likelihood
function L given by
L(\mu) = \mathbb{E}_{\mu^*} \log\Big( \frac{1}{M} \sum_{i=1}^{M} \phi(X \mid \mu_i, I) \Big).    (4b)
Here E_{μ*} denotes expectation taken over the random vector X drawn according to the model
GMM(μ*).
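A minimal numpy sketch of L_n follows; for large n it also serves as a Monte Carlo estimate of the population L in (4b):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(mu_star, n):
    """Draw n points from GMM(mu_star) with identity covariances."""
    idx = rng.integers(len(mu_star), size=n)
    return mu_star[idx] + rng.standard_normal((n, mu_star.shape[1]))

def log_lik(mu, X):
    """Sample log-likelihood L_n(mu) from (4a)."""
    d = X.shape[1]
    lp = -0.5 * ((X[:, None, :] - mu[None]) ** 2).sum(-1)       # (n, M)
    m = lp.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(lp - m).sum(axis=1))          # logsumexp
    return lse.mean() - np.log(len(mu)) - 0.5 * d * np.log(2 * np.pi)

mu_star = np.array([[-4.0], [0.0], [4.0]])
X = sample_gmm(mu_star, 50000)
print(log_lik(mu_star, X))        # L is maximized at mu = mu_star
```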
A straightforward implication of the positivity of the KL divergence is that the population likelihood
function is in fact maximized at μ* (along with permutations thereof, depending on how we index
the mixture components). On the basis of empirical evidence, Srebro [2007] conjectured that this
population log-likelihood is in fact well-behaved, in the sense of having no spurious local optima.
In Theorem 1, we show that this intuition is false, and provide a simple example of a mixture of
M = 3 well-separated Gaussians in dimension d = 1, whose population log-likelihood function has
arbitrarily bad local optima.
2.2
Expectation-Maximization Algorithm
A natural way to estimate the mean vectors μ* is by attempting to maximize the sample log-likelihood
defined by the samples {x_ℓ}_{ℓ=1}^{n}. For a non-degenerate Gaussian mixture model, the log-likelihood
is non-concave. Rather than attempting to maximize the log-likelihood directly, the EM algorithm
proceeds by iteratively maximizing a lower bound on the log-likelihood. It does so by alternating
between two steps:
1. E-step: For each i ∈ [M] and ℓ ∈ [n], compute the membership weight

       w_i(x_ℓ) = φ(x_ℓ | μ_i, I) / Σ_{j=1}^{M} φ(x_ℓ | μ_j, I).

2. M-step: For each i ∈ [M], update the mean vector μ_i via

       μ_i^{new} = Σ_{ℓ=1}^{n} w_i(x_ℓ) x_ℓ / Σ_{ℓ=1}^{n} w_i(x_ℓ).
In the population setting, the M-step becomes:

    μ_i^{new} = E_{μ*}[w_i(X) X] / E_{μ*}[w_i(X)].    (5)
Intuitively, the M-step updates the mean vector of each Gaussian component to be a weighted centroid
of the samples for appropriately chosen weights.
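To make the updates concrete, the following is a minimal NumPy sketch of one EM iteration for the
equally weighted, unit-variance spherical model GMM(μ); the data matrix X and the function name
are our own illustrative conventions, not notation from the paper.

```python
import numpy as np

def em_step(X, mu):
    """One EM iteration for an equally weighted spherical GMM with identity covariance.

    X  : (n, d) array of samples x_1, ..., x_n.
    mu : (M, d) array of current mean estimates.
    """
    # E-step: w_i(x_l) is proportional to phi(x_l | mu_i, I); since all components
    # share covariance I and weight 1/M, this is a softmax of -||x_l - mu_i||^2 / 2.
    sq_dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # (n, M)
    log_w = -0.5 * sq_dists
    log_w -= log_w.max(axis=1, keepdims=True)                        # numerical stability
    w = np.exp(log_w)
    w /= w.sum(axis=1, keepdims=True)                                # rows sum to 1
    # M-step: each mean becomes the weighted centroid of the samples.
    return (w.T @ X) / w.sum(axis=0)[:, None]
```

Iterating mu = em_step(X, mu) from means sampled uniformly from the rows of X matches the
random initialization scheme analyzed in Section 2.3.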
First-order EM updates: For a general latent variable model with observed variables X = x,
latent variables Z and model parameters θ, by Jensen's inequality, the log-likelihood function can be
lower bounded as

    log P(x | θ′) ≥ E_{Z∼P(·|x;θ)} log P(x, Z | θ′) − E_{Z∼P(·|x;θ)} log P(Z | x; θ′),

where the first term on the right-hand side is denoted by Q(θ′ | θ).
Each step of the EM algorithm can also be viewed as optimizing over this lower bound, which gives:

    θ^{new} := argmax_{θ′} Q(θ′ | θ).
There are many variants of the EM algorithm which rely on partial updates at each iteration
instead of finding the exact optimum of Q(θ′ | θ). One important example, analyzed in the work
of Balakrishnan et al. [2015], is the first-order EM algorithm. The first-order EM algorithm takes a
step along the gradient of the function Q(θ′ | θ) (with respect to its first argument) in each iteration.
Concretely, given a step size s > 0, the first-order EM updates can be written as:

    θ^{new} = θ + s ∇_{θ′} Q(θ′ | θ) |_{θ′=θ}.
In the case of the model GMM(μ*), the gradient EM updates on the population objective take the
form

    μ_i^{new} = μ_i + s E_{μ*}[w_i(X) (X − μ_i)].    (6)

This update turns out to be equivalent to gradient ascent on the population likelihood L with step size
s > 0 (see the paper Balakrishnan et al. [2015] for details).
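Reusing the membership weights from the EM sketch above, update (6) admits an equally short
empirical sketch; the default step size below is an arbitrary placeholder, not a recommendation from
the paper.

```python
import numpy as np

def first_order_em_step(X, mu, s=0.5):
    """One first-order (gradient) EM step, the sample analogue of update (6)."""
    sq_dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    log_w = -0.5 * sq_dists
    log_w -= log_w.max(axis=1, keepdims=True)
    w = np.exp(log_w)
    w /= w.sum(axis=1, keepdims=True)
    # Empirical counterpart of E[w_i(X)(X - mu_i)]:
    #   (1/n) sum_l w_i(x_l) x_l  -  mu_i * (1/n) sum_l w_i(x_l).
    grad = (w.T @ X) / len(X) - w.mean(axis=0)[:, None] * mu
    return mu + s * grad
```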
2.3 Random Initialization
Since the log-likelihood function is non-concave, the point to which the EM algorithm converges
depends on the initial value of μ. In practice, it is standard to choose these values by some form
of random initialization. For instance, one method is to initialize the mean vectors by sampling
uniformly at random from the data set {x_ℓ}_{ℓ=1}^{n}. This scheme is intuitively reasonable, because
it automatically adapts to the locations of the true centers. If the true centers have large mutual
distances, then the initialized centers will also be scattered. Conversely, if the true centers concentrate
in a small region of the space, then the initialized centers will also be close to each other. In practice,
initializing μ by uniformly drawing from the data is often more reasonable than drawing μ from a
fixed distribution.
In this paper, we analyze the EM algorithm and its variants at the population level. We focus on the
above practical initialization scheme of selecting μ uniformly at random from the sample set. In
the idealized population setting, this is equivalent to sampling the initial values of μ i.i.d. from the
distribution GMM(μ*). Throughout this paper, we refer to this particular initialization strategy as
random initialization.
3 Main results
We now turn to the statements of our main results, along with a discussion of some of their consequences.
3.1 Structural properties
In our first main result (Theorem 1), for any M ≥ 3, we exhibit an M-component mixture of
Gaussians in dimension d = 1 for which the population log-likelihood has a bad local maximum
whose log-likelihood is arbitrarily worse than that attained by the true parameters μ*. This result
provides a negative answer to the conjecture of Srebro [2007].

Theorem 1. For any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture
of M unit-variance spherical Gaussians GMM(μ*) and a local maximum μ′ such that

    L(μ′) ≤ L(μ*) − C_gap.
In order to illustrate the intuition underlying Theorem 1, we give a geometrical description of our
construction for M = 3. Suppose that the true centers μ*_1, μ*_2 and μ*_3 are such that the distance
between μ*_1 and μ*_2 is much smaller than the respective distances from μ*_1 to μ*_3, and from μ*_2 to
μ*_3. Now, consider the point μ := (μ_1, μ_2, μ_3) where μ_1 = (μ*_1 + μ*_2)/2; the points μ_2 and μ_3 are
both placed at the true center μ*_3. This assignment does not maximize the population log-likelihood,
because only one center is assigned to the two Gaussian components centered at μ*_1 and μ*_2, while
two centers are assigned to the Gaussian component centered at μ*_3. However, when the components
are well-separated we are able to show that there is a local maximum in the neighborhood of this
configuration. In order to establish the existence of a local maximum, we first define a neighborhood
of this configuration ensuring that it does not contain any global maximum, and then prove that the
log-likelihood on the boundary of this neighborhood is strictly smaller than that of the sub-optimal
configuration μ. Since the log-likelihood is bounded from above, this neighborhood must contain at
least one maximum of the log-likelihood. Since the global maxima are not in this neighborhood by
construction, any maximum in this neighborhood must be a local maximum. See Section A for a
detailed proof.
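The gap between the true parameters and the merged configuration around which the local maximum
is shown to exist can be estimated numerically. The Monte Carlo sketch below uses illustrative
center positions of our own choosing, not the constants from the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = np.array([0.0, 5.0, 100.0])    # mu*_1, mu*_2 close together; mu*_3 far away (d = 1)
mu_bad = np.array([2.5, 100.0, 100.0])   # ((mu*_1 + mu*_2)/2, mu*_3, mu*_3)

def pop_loglik(mu, n_samples=200_000):
    """Monte Carlo estimate of L(mu): draw X ~ GMM(mu_true) and average
    log((1/3) sum_i phi(X | mu_i, 1))."""
    comps = rng.integers(0, 3, size=n_samples)
    x = mu_true[comps] + rng.standard_normal(n_samples)
    dens = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) / np.sqrt(2 * np.pi)
    return np.log(dens.mean(axis=1)).mean()

print(pop_loglik(mu_true) - pop_loglik(mu_bad))  # a strictly positive gap
```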
3.2 Algorithmic consequences
An important implication of Theorem 1 is that any iterative algorithm, such as EM or gradient ascent,
that attempts to maximize the likelihood based on local updates cannot be globally convergent, that
is, it cannot converge to (near) globally optimal solutions from an arbitrary initialization. Indeed, if
any such algorithm is initialized at the local maximum, then it will remain trapped. However,
one might argue that this conclusion is overly pessimistic, in that we have only shown that these
algorithms fail when initialized at a certain (adversarially chosen) point. Indeed, the mere existence
of bad local maxima need not be a practical concern unless it can be shown that a typical optimization
algorithm will frequently converge to one of them. The following result shows that the EM algorithm,
when applied to the population likelihood and initialized according to the random scheme described
in Section 2.3, converges to a bad critical point with high probability.
Theorem 2. Let μ^t be the t-th iterate of the EM algorithm initialized by the random initialization
scheme described previously. There exists a universal constant c such that, for any M ≥ 3 and any
constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians
GMM(μ*) with

    P( L(μ^t) ≤ L(μ*) − C_gap for all t ≥ 0 ) ≥ 1 − e^{−cM}.
Theorem 2 shows that, for the specified configuration μ*, the probability of success for the EM
algorithm is exponentially small as a function of M. As a consequence, in order to guarantee
recovering a global maximum with at least constant probability, the EM algorithm with random
initialization must be executed at least e^{Ω(M)} times. This result strongly suggests that effective
initialization schemes, such as those based on pilot estimators utilizing the method of moments [Moitra
and Valiant, 2010, Hsu and Kakade, 2013], are critical to finding good maxima in general GMMs.
The key idea in the proof of Theorem 2 is the following: suppose that all the true centers are grouped
into two clusters that are extremely far apart, and suppose further that we initialize all the centers in
the neighborhood of these two clusters, while ensuring that at least one center lies within each cluster.
In this situation, all centers will remain trapped within the cluster in which they were first initialized,
irrespective of how many steps we take in the EM algorithm. Intuitively, this suggests that the only
favorable initialization schemes (from which convergence to a global maximum is possible) are those
in which (1) all initialized centers fall in the neighborhood of exactly one cluster of true centers, (2)
the number of centers initialized within each cluster of true centers exactly matches the number of
true centers in that cluster. However, this observation alone only suffices to guarantee that the success
probability is polynomially small in M .
In order to demonstrate that the success probability is exponentially small in M , we need to further
refine this construction. In more detail, we construct a Gaussian mixture distribution with a recursive
structure: on top level, its true centers can be grouped into two clusters far apart, and then inside each
cluster, the true centers can be further grouped into two mini-clusters which are well-separated, and so
on. We can repeat this structure for Ω(log M) levels. For this GMM instance, even in the case where
the number of true centers exactly matches the number of initialized centers in each cluster at the top
level, we still need to consider the configuration of the initial centers within the mini-clusters, which
further reduces the probability of success for a random initialization. A straightforward calculation
then shows that the probability of a favorable random initialization is on the order of e^{−Ω(M)}. The
full proof is given in Section A.2.
We devote the remainder of this section to a treatment of the first-order EM algorithm. Our first result
in this direction shows that the problem of convergence to sub-optimal fixed points remains a problem
for the first-order EM algorithm, provided the step-size is not chosen too aggressively.
Theorem 3. Let μ^t be the t-th iterate of the first-order EM algorithm with step size s ∈ (0, 1),
initialized by the random initialization scheme described previously. There exists a universal constant
c such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of
M unit-variance spherical Gaussians GMM(μ*) with

    P( L(μ^t) ≤ L(μ*) − C_gap for all t ≥ 0 ) ≥ 1 − e^{−cM}.    (7)
We note that the restriction on the step-size is weak, and is satisfied by the theoretically optimal
choice for a mixture of two Gaussians in the setting studied by Balakrishnan et al. [2015]. Recall
that the first-order EM updates are identical to gradient ascent updates on the log-likelihood function.
As a consequence, we can conclude that the most natural local search heuristics for maximizing
the log-likelihood (EM and gradient ascent) fail to provide statistically meaningful estimates when
initialized randomly, unless we repeat this procedure exponentially many (in M) times.
Our final result concerns the type of fixed points reached by the first-order EM algorithm in our setting.
Pascanu et al. [2014] argue that for high-dimensional optimization problems, the principal difficulty
is the proliferation of saddle points, not the existence of poor local maxima. In our setting, however,
we can leverage recent results on gradient methods [Lee et al., 2016, Panageas and Piliouras, 2016]
to show that the first-order EM algorithm cannot converge to strict saddle points. More precisely:
Definition 1 (Strict saddle point, Ge et al. [2015]). For a maximization problem, we say that a critical
point x_ss of a function f is a strict saddle point if the Hessian ∇²f(x_ss) has at least one strictly
positive eigenvalue.
With this definition, we have the following:
Theorem 4. Let μ^t be the t-th iterate of the first-order EM algorithm with constant step size s ∈ (0, 1),
initialized by the random initialization scheme described previously. Then for any M-component
mixture of spherical Gaussians:

(a) The iterates μ^t converge to a critical point of the log-likelihood.

(b) For any strict saddle point μ_ss, we have P(lim_{t→∞} μ^t = μ_ss) = 0.
Theorems 3 and 4 provide strong support for the claim that the sub-optimal points to which the
first-order EM algorithm frequently converges are bad local maxima. The algorithmic failure of the
first-order EM algorithm is most likely due to the presence of bad local maxima, as opposed to (strict)
saddle-points.
The proof of Theorem 4 is based on recent work [Lee et al., 2016, Panageas and Piliouras, 2016] on
the asymptotic performance of gradient methods. That work reposes on the stable manifold theorem
from dynamical systems theory, and, applied directly to our setting, would require establishing that
the population likelihood L is smooth. Our proof technique avoids such a smoothness argument; see
Section A.4 for the details. The proof technique makes use of specific properties of the first-order
EM algorithm that do not hold for the EM algorithm. We conjecture that a similar result is true for
the EM algorithm; however, we suspect that a generalized version of the stable manifold theorem
will be needed to establish such a result.
4 Conclusion and open problems
In this paper, we resolved an open problem of Srebro [2007], by demonstrating the existence of
arbitrarily bad local maxima for the population log-likelihood of a Gaussian mixture model, even in
the idealized situation where each component is uniformly weighted, spherical with unit variance,
and well-separated. We further provided some evidence that even in this favorable setting random
initialization schemes for the population EM algorithm are likely to fail with high probability. Our
results carry over in a straightforward way, via standard empirical process arguments, to settings
where a large finite sample is provided.
An interesting open question is to resolve the necessity of at least three mixture components in our
constructions. In particular, we believe that at least three mixture components are necessary for the
log-likelihood to be poorly behaved, and that for a well-separated mixture of two Gaussians the EM
algorithm with a random initialization is in fact successful with high probability.
In a related vein, understanding the empirical success of EM-style algorithms using random initialization schemes despite their failure on seemingly benign problem instances remains an open problem
which we hope to address in future work.
Acknowledgements
This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air
Force Office of Scientific Research Grant AFOSR-FA9550-14-1-001, the Mathematical Data Science
program of the Office of Naval Research under grant number N00014-15-1-2670, and National
Science Foundation Grant CIF-31712-23800.
References

Elizabeth S. Allman, Catherine Matias, and John A. Rhodes. Identifiability of parameters in latent structure
models with many observed variables. Annals of Statistics, 37(6A):3099–3132, 2009.

Carlos Améndola, Mathias Drton, and Bernd Sturmfels. Maximum likelihood estimates for Gaussian mixtures
are transcendental. In International Conference on Mathematical Aspects of Computer and Information
Sciences, pages 579–590. Springer, 2015.

Sanjeev Arora, Ravi Kannan, et al. Learning mixtures of separated nonspherical Gaussians. The Annals of
Applied Probability, 15(1A):69–92, 2005.

Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From
population to sample-based analysis. Annals of Statistics, 2015.

Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In 51st Annual IEEE Symposium
on Foundations of Computer Science, pages 103–112. IEEE, 2010.

Kamalika Chaudhuri and Satish Rao. Learning mixtures of product distributions using correlations and
independence. In 21st Annual Conference on Learning Theory, volume 4, pages 9–1, 2008.

Jiahua Chen. Optimal rate of convergence for finite mixture models. Annals of Statistics, 23(1):221–233, 1995.

Sanjoy Dasgupta and Leonard Schulman. A probabilistic analysis of EM for mixtures of separated, spherical
Gaussians. Journal of Machine Learning Research, 8:203–226, 2007.

Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM
algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.

David L. Donoho and Richard C. Liu. The "automatic" robustness of minimum distance functionals. Annals of
Statistics, 16(2):552–586, 1988.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for
tensor decomposition. In 28th Annual Conference on Learning Theory, pages 797–842, 2015.

Christopher R. Genovese and Larry Wasserman. Rates of convergence for the Gaussian mixture sieve. Annals of
Statistics, 28(4):1105–1127, 2000.

Subhashis Ghosal and Aad W. van der Vaart. Entropies and rates of convergence for maximum likelihood and
Bayes estimation for mixtures of normal densities. Annals of Statistics, 29(5):1233–1263, 2001.

Nhat Ho and XuanLong Nguyen. Identifiability and optimal rates of convergence for parameters of multiple
types in finite mixtures. arXiv preprint arXiv:1501.02497, 2015.

Daniel Hsu and Sham M. Kakade. Learning mixtures of spherical Gaussians: Moment methods and spectral
decompositions. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pages
11–20. ACM, 2013.

Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Gradient descent converges to
minimizers. In 29th Annual Conference on Learning Theory, pages 1246–1257, 2016.

Po-Ling Loh and Martin J. Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic
theory for local optima. In Advances in Neural Information Processing Systems, pages 476–484, 2013.

Ankur Moitra and Gregory Valiant. Settling the polynomial learnability of mixtures of Gaussians. In 51st Annual
IEEE Symposium on Foundations of Computer Science, pages 93–102. IEEE, 2010.

Ioannis Panageas and Georgios Piliouras. Gradient descent converges to minimizers: The case of non-isolated
critical points. arXiv preprint arXiv:1605.00405, 2016.

Razvan Pascanu, Yann N. Dauphin, Surya Ganguli, and Yoshua Bengio. On the saddle point problem for
non-convex optimization. arXiv preprint arXiv:1405.4604, 2014.

Nathan Srebro. Are there local maxima in the infinite-sample likelihood of Gaussian mixture estimation? In
20th Annual Conference on Learning Theory, pages 628–629, 2007.

Henry Teicher. Identifiability of finite mixtures. The Annals of Mathematical Statistics, 34(4):1265–1269, 1963.

D. Michael Titterington. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985.

Santosh Vempala and Grant Wang. A spectral algorithm for learning mixtures of distributions. In 43rd Annual
IEEE Symposium on Foundations of Computer Science, pages 113–122. IEEE, 2002.

Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse
nonconvex learning problems. Annals of Statistics, 42(6):2164, 2014.
Hierarchical Clustering via Spreading Metrics
Aurko Roy¹ and Sebastian Pokutta²

¹ College of Computing, Georgia Institute of Technology, Atlanta, GA, USA.
  Email: [email protected]
² ISyE, Georgia Institute of Technology, Atlanta, GA, USA.
  Email: [email protected]
Abstract
We study the cost function for hierarchical clusterings introduced by [16] where
hierarchies are treated as first-class objects rather than deriving their cost from
projections into flat clusters. It was also shown in [16] that a top-down algorithm
returns a hierarchical clustering of cost at most O(α_n log n) times the cost of
the optimal hierarchical clustering, where α_n is the approximation ratio of the
Sparsest Cut subroutine used. Thus using the best known approximation algorithm
for Sparsest Cut due to Arora-Rao-Vazirani, the top-down algorithm returns a
hierarchical clustering of cost at most O(log^{3/2} n) times the cost of the optimal
solution. We improve this by giving an O(log n)-approximation algorithm for this
problem. Our main technical ingredients are a combinatorial characterization of
ultrametrics induced by this cost function, deriving an Integer Linear Programming
(ILP) formulation for this family of ultrametrics, and showing how to iteratively
round an LP relaxation of this formulation by using the idea of sphere growing
which has been extensively used in the context of graph partitioning. We also prove
that our algorithm returns an O(log n)-approximate hierarchical clustering for a
generalization of this cost function also studied in [16]. We also give constant
factor inapproximability results for this problem.
1 Introduction
Hierarchical clustering is an important method in cluster analysis where a data set is recursively
partitioned into clusters of successively smaller size. They are typically represented by rooted trees
where the root corresponds to the entire data set, the leaves correspond to individual data points and
the intermediate nodes correspond to a cluster of its descendant leaves. Such a hierarchy represents
several possible flat clusterings of the data at various levels of granularity; indeed every pruning of
this tree returns a possible clustering. Therefore in situations where the number of desired clusters is
not known beforehand, a hierarchical clustering scheme is often preferred to flat clustering.
The most popular algorithms for hierarchical clustering are bottoms-up agglomerative algorithms
like single linkage, average linkage and complete linkage. In terms of theoretical guarantees these
algorithms are known to correctly recover a ground truth clustering if the similarity function on the
data satisfies corresponding stability properties (see, e.g., [5]). Often, however, one wishes to think of
a good clustering as optimizing some kind of cost function rather than recovering a hidden ?ground
truth?. This is the standard approach in the classical clustering setting where popular objectives are
k-means, k-median, min-sum and k-center (see Chapter 14, [23]). However as pointed out by [16]
for a lot of popular hierarchical clustering algorithms including linkage based algorithms, it is hard
to pinpoint explicitly the cost function that these algorithms are optimizing. Moreover, much of the
existing cost function based approaches towards hierarchical clustering evaluate a hierarchy based
on a cost function for flat clustering, e.g., assigning the k-means or k-median cost to a pruning of
this tree. Motivated by this, [16] introduced a cost function for hierarchical clustering where the cost
takes into account the entire structure of the tree rather than just the projections into flat clusterings.
This cost function is shown to recover the intuitively correct hierarchies on several synthetic examples
like planted partitions and cliques. In addition, a top-down graph partitioning algorithm is presented
that outputs a tree with cost at most O(α_n log n) times the cost of the optimal tree, where α_n
is the approximation guarantee of the Sparsest Cut subroutine used. Thus using the Leighton-Rao
algorithm [33] or the Arora-Rao-Vazirani algorithm [3] gives an approximation factor of O(log² n)
and O(log^{3/2} n) respectively.
In this work we give a polynomial time algorithm to recover a hierarchical clustering of cost at most
O(log n) times the cost of the optimal clustering according to this cost function. We also analyze
a generalization of this cost function studied by [16] and show that our algorithm still returns an
O(log n) approximate clustering in this setting. We do this by giving a combinatorial characterization
of the ultrametrics induced by this cost function, writing a convex relaxation for it and showing how
to iteratively round a fractional solution into an integral one using a rounding scheme used in graph
partitioning algorithms. We also implement the integer program, its LP relaxation, and the rounding
algorithm and test it on some synthetic and real world data sets to compare the cost of the rounded
solutions to the true optimum, as well as to compare its performance to other hierarchical clustering
algorithms used in practice. Our experiments suggest that the hierarchies found by this algorithm are
often better than the ones found by linkage based algorithms as well as the k-means algorithm in
terms of the error of the best pruning of the tree compared to the ground truth. We conclude with
constant factor hardness results for this problem.
1.1 Related Work
The immediate precursor to this work is [16] where the cost function for evaluating a hierarchical
clustering was introduced. Prior to this there has been a long line of research on hierarchical
clustering in the context of phylogenetics and taxonomy (see, e.g., [22]). Several authors have also
given theoretical justifications for the success of the popular linkage based algorithms for hierarchical
clustering (see, e.g. [1]). In terms of cost functions, one approach has been to evaluate a hierarchy in
terms of the k-means or k-median cost that it induces (see [17]). The cost function and the top-down
algorithm in [16] can also be seen as a theoretical justification for several graph partitioning heuristics
that are used in practice.
LP relaxations for hierarchical clustering have also been studied in [2] where the objective is to fit
a tree metric to a data set given pairwise dissimilarities. Another work that is indirectly related to
our approach is [18] where an ILP was studied in the context of obtaining the closest ultrametric to
arbitrary functions on a discrete set. Our approach is to give a combinatorial characterization of the
ultrametrics induced by the cost function of [16] which allows us to use the tools from [18] to model
the problem as an ILP. The natural LP relaxation of this ILP turns out to be closely related to LP
relaxations considered before for several graph partitioning problems (see, e.g., [33, 19, 32]) and we
use a rounding technique studied in this context to round this LP relaxation.
Recently, we became aware of independent work by Charikar and Chatziafratis [12] obtaining similar
results for hierarchical clustering. In particular they improve the approximation factor to O(√(log n))
by showing how to round a spreading metric SDP relaxation for this cost function. They also analyze
a similar LP relaxation using the divide-and-conquer approximation algorithms using spreading
metrics paradigm of [20] together with a result of [7] to prove an O(log n) approximation. Finally,
they also give similar inapproximability results for this problem.
2 Preliminaries
A similarity based clustering problem consists of a dataset V of n points and a similarity function
κ : V × V → R such that κ(i, j) is a measure of the similarity between i and j for any i, j ∈ V. We
will assume that the similarity function is symmetric, i.e., κ(i, j) = κ(j, i) for every i, j ∈ V. We
also require κ ≥ 0 as in [16]; see supplementary material for a discussion. Note that we do not make
any assumptions about the points in V coming from an underlying metric space. For a given instance
of a clustering problem we have an associated weighted complete graph K_n with vertex set V and
weight function given by κ. A hierarchical clustering of V is a tree T with a designated root r and
with the elements of V as its leaves, i.e., leaves(T) = V. For any set S ⊆ V we denote the lowest
common ancestor of S in T by lca(S). For pairs of points i, j ∈ V we will abuse the notation for
the sake of simplicity and denote lca({i, j}) simply by lca(i, j). For a node v of T we denote the
subtree of T rooted at v by T[v]. The following cost function was introduced by [16] to measure the
quality of the hierarchical clustering T
    cost(T) := Σ_{{i,j} ∈ E(K_n)} κ(i, j) |leaves(T[lca(i, j)])|.    (1)

The intuition behind this cost function is as follows. Let T be a hierarchical clustering with designated
root r so that r represents the whole data set V. Since leaves(T) = V, every internal node v ∈ T
represents a cluster of its descendant leaves, with the leaves themselves representing singleton clusters
of V. Starting from r and going down the tree, every distinct pair of points i, j ∈ V will be eventually
separated at the leaves. If κ(i, j) is large, i.e., i and j are very similar to each other, then we would
like them to be separated as far down the tree as possible if T is a good clustering of V. This is
enforced in the cost function (1): if κ(i, j) is large then the number of leaves of lca(i, j) should be
small, i.e., lca(i, j) should be far from the root r of T.
Under the cost function (1), one can interpret the tree T as inducing an ultrametric d_T on V given by
d_T(i, j) := |leaves(T[lca(i, j)])| − 1. This is an ultrametric since d_T(i, j) = 0 iff i = j and for any
triple i, j, k ∈ V we have d_T(i, j) ≤ max{d_T(i, k), d_T(j, k)}. The following definition introduces
the notion of non-trivial ultrametrics. These turn out to be precisely the ultrametrics that are induced
by tree decompositions of V corresponding to cost function (1), as we will show in Lemma 5.
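As a sanity check of the definitions, the sketch below computes cost(T) for a small hierarchy; the
nested-tuple encoding of trees is our own convention, not one from the paper.

```python
from itertools import combinations

def leaves(tree):
    """A tree is either a leaf label or a tuple of subtrees."""
    if not isinstance(tree, tuple):
        return {tree}
    return set().union(*(leaves(sub) for sub in tree))

def lca_leaf_count(tree, i, j):
    """|leaves(T[lca(i, j)])|: descend into the unique child containing both leaves."""
    if isinstance(tree, tuple):
        for sub in tree:
            ls = leaves(sub)
            if i in ls and j in ls:
                return lca_leaf_count(sub, i, j)
    return len(leaves(tree))

def cost(tree, kappa):
    return sum(kappa[p] * lca_leaf_count(tree, *p)
               for p in combinations(sorted(leaves(tree)), 2))

# V = {0, 1, 2, 3} with high similarity inside {0, 1} and {2, 3}:
kappa = {p: (1.0 if p in [(0, 1), (2, 3)] else 0.1)
         for p in combinations(range(4), 2)}
print(cost(((0, 1), (2, 3)), kappa))   # 2*1.0*2 + 4*0.1*4 = 5.6
```

The induced ultrametric is then d_T(i, j) = lca_leaf_count(tree, i, j) − 1.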
Definition 1. An ultrametric d on a set of points V is non-trivial if the following conditions hold.

1. For every non-empty set S ⊆ V, there is a pair of points i, j ∈ S such that d(i, j) ≥ |S| − 1.

2. For any t, if S_t is an equivalence class of V under the relation i ∼ j iff d(i, j) ≤ t, then
   max_{i,j ∈ S_t} d(i, j) ≤ |S_t| − 1.
Note that for an equivalence class S_t where d(i, j) ≤ t for every i, j ∈ S_t it follows from Condition 1
that t ≥ |S_t| − 1. Thus in the case when t = |S_t| − 1 the two conditions imply that the maximum
distance between any two points in S is t and that there is a pair i, j ∈ S for which this maximum
is attained. The following lemma shows that non-trivial ultrametrics behave well under restrictions
to equivalence classes S_t of the form i ∼ j iff d(i, j) ≤ t. Due to page limitation full proofs are
included in the supplementary material.

Lemma 2. Let d be a non-trivial ultrametric on V and let S_t ⊆ V be an equivalence class under the
relation i ∼ j iff d(i, j) ≤ t. Then d restricted to S_t is a non-trivial ultrametric on S_t.
The intuition behind the two conditions in Definition 1 is as follows. Condition 1 imposes a certain
lower bound by ruling out trivial ultrametrics where, e.g., d(i, j) = 1 for every distinct pair i, j ∈ V.
On the other hand Condition 2 discretizes and imposes an upper bound on d by restricting its range
to the set {0, 1, . . . , n − 1} (see Lemma 3). This rules out the other spectrum of triviality where for
example d(i, j) = n for every distinct pair i, j ∈ V with |V| = n.

Lemma 3. Let d be a non-trivial ultrametric on the set V. Then the range of d is contained in the set
{0, 1, . . . , n − 1} with |V| = n.
3 Ultrametrics and Hierarchical Clusterings
In this section we study the combinatorial properties of the ultrametrics induced by cost function (1).
We start with the following easy lemma showing that if a subset S ⊆ V has r as its lowest common
ancestor, then there must be a pair of points i, j ∈ S for which r = lca(i, j).

Lemma 4. Let S ⊆ V be of size ≥ 2. If r = lca(S) then there is a pair i, j ∈ S such that lca(i, j) = r.
The following lemma shows that non-trivial ultrametrics exactly capture the ultrametrics that are
induced by tree decompositions of V using cost function (1). The proof of Lemma 5 is inductive and
uses Lemma 4 as a base case. As it turns out, the inductive proof also gives an algorithm to build the
corresponding hierarchical clustering given such a non-trivial ultrametric in polynomial time. Since
this algorithm is relatively straightforward, we refer the reader to the supplementary material for the
details.
Lemma 5. Let T be a hierarchical clustering on V and let d_T be the ultrametric on V induced
by cost function (1). Then d_T is a non-trivial ultrametric on V. Conversely, let d be a non-trivial
ultrametric on V. Then there is a hierarchical clustering T on V such that for any pair i, j ∈ V we
have d_T(i, j) = |leaves(T[lca(i, j)])| − 1 = d(i, j). Moreover this hierarchy can be constructed in
time O(n³) where |V| = n.
Therefore to find the hierarchical clustering of minimum cost, it suffices to minimize ⟨κ, d⟩ over
non-trivial ultrametrics d : V × V → {0, . . . , n − 1}. A natural approach is to formulate this
problem as an Integer Linear Program (ILP) and then study Linear Programming (LP) relaxations of
it. We consider the following ILP for this problem that is motivated by [18]. We have the variables
x^1_{ij}, . . . , x^{n−1}_{ij} for every distinct pair i, j ∈ V with x^t_{ij} = 1 if and only if d(i, j) ≥ t.
For any positive integer n, let [n] := {1, 2, . . . , n}.
    min   Σ_{t=1}^{n−1}  Σ_{{i,j} ∈ E(K_n)}  κ(i, j) x^t_{ij}                                (ILP-ultrametric)

    s.t.  x^t_{ij} ≥ x^{t+1}_{ij}                        ∀ i, j ∈ V,  t ∈ [n − 2]            (2)

          x^t_{ij} + x^t_{jk} ≥ x^t_{ik}                 ∀ i, j, k ∈ V,  t ∈ [n − 1]         (3)

          Σ_{i,j ∈ S} x^t_{ij} ≥ 2                       ∀ t ∈ [n − 1],  S ⊆ V, |S| = t + 1  (4)

          Σ_{i,j ∈ S} x^{|S|}_{ij} ≤ |S|² ( Σ_{i,j ∈ S} x^t_{ij} + Σ_{i ∈ S, j ∉ S} (1 − x^t_{ij}) )
                                                         ∀ t ∈ [n − 1],  S ⊆ V              (5)

          x^t_{ij} = x^t_{ji},  x^t_{ii} = 0             ∀ i, j ∈ V,  t ∈ [n − 1]            (6)

          x^t_{ij} ∈ {0, 1}                              ∀ i, j ∈ V,  t ∈ [n − 1]            (7)
Note that constraint (3) is the same as the strong triangle inequality since the variables x^t_{ij} are in
{0, 1}. Constraint (6) ensures that the ultrametric is symmetric. Constraint (4) ensures the ultrametric
satisfies Condition 1 of non-triviality: for every S ⊆ V of size t + 1 we know that there must be
points i, j ∈ S such that d(i, j) = d(j, i) ≥ t, or in other words x^t_{ij} = x^t_{ji} = 1. Constraint (5)
ensures that the ultrametric satisfies Condition 2 of non-triviality. To see this note that the constraint is
active only when Σ_{i,j ∈ S} x^t_{ij} = 0 and Σ_{i ∈ S, j ∉ S} (1 − x^t_{ij}) = 0. In other words d(i, j) ≤ t − 1 for
every i, j ∈ S and S is a maximal such set, since if i ∈ S and j ∉ S then d(i, j) ≥ t. Thus S is
an equivalence class under the relation i ∼ j iff d(i, j) ≤ t − 1 and so for every i, j ∈ S we have
d(i, j) ≤ |S| − 1, or equivalently x^{|S|}_{ij} = 0. The ultrametric d represented by a feasible solution x^t_{ij} is
given by d(i, j) = Σ_{t=1}^{n−1} x^t_{ij}.
Definition 6. For any {x^t_{ij} | t ∈ [n − 1], i, j ∈ V} let E_t be defined as E_t := { {i, j} | x^t_{ij} = 0 }.

Note that if x^t_{ij} is feasible for ILP-ultrametric then E_t ⊆ E_{t+1} for any t since x^t_{ij} ≥ x^{t+1}_{ij}. The sets
{E_t}_{t=1}^{n−1} induce a natural sequence of graphs {G_t}_{t=1}^{n−1} where G_t = (V, E_t) with V being the data
set.
For a fixed t ∈ {1, . . . , n − 1} it is instructive to study the combinatorial properties of the so-called
layer-t problem, where we fix a choice of t and restrict ourselves to the constraints corresponding to
that particular t. In particular we drop the inter-layer constraint (2), and constraints (3), (4) and (5)
only range over i, j, k ∈ V and S ⊆ V with t fixed. The following lemma provides a combinatorial
characterization of feasible solutions to the layer-t problem.

Lemma 7. Fix a choice of t ∈ [n − 1]. Let G_t = (V, E_t) be the graph as in Definition 6 corresponding
to a solution x^t_{ij} to the layer-t problem. Then G_t is a disjoint union of cliques of size ≤ t. Moreover
this exactly characterizes all feasible solutions to the layer-t ILP.
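This characterization is easy to verify mechanically. A minimal sketch, using networkx for the
component computation (a hand-rolled union-find would serve equally well):

```python
import networkx as nx

def layer_t_feasible(n, t, x_t):
    """Check Lemma 7: the graph G_t = (V, E_t) with E_t = {{i, j} : x^t_{ij} = 0}
    must be a disjoint union of cliques of size at most t.
    `x_t` maps frozenset({i, j}) to a value in {0, 1}."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from(tuple(e) for e, val in x_t.items() if val == 0)
    for comp in nx.connected_components(G):
        k = len(comp)
        if k > t:
            return False                     # a component (hence clique) is too large
        if G.subgraph(comp).number_of_edges() != k * (k - 1) // 2:
            return False                     # connected but not a clique
    return True
```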
By Lemma 7 the layer-t problem is to find a subset E^t ⊆ E(K_n) of minimum weight under κ, such
that the complement graph G_t = (V, E_t) is a disjoint union of cliques of size ≤ t. Our algorithmic
approach is to solve an LP relaxation of ILP-ultrametric and then round the solution to get a feasible
solution to ILP-ultrametric. The rounding however proceeds iteratively in a layer-wise manner and so
we need to make sure that the rounded solution satisfies the inter-layer constraints (2) and (5). The
following lemma gives a combinatorial characterization of solutions that satisfy these two constraints.

Lemma 8. For every t ∈ [n − 1], let x^t_{ij} be feasible for the layer-t problem. Let G_t = (V, E_t) be
the graph as in Definition 6 corresponding to x^t_{ij}, so that by Lemma 7, G_t is a disjoint union of
cliques K^t_1, . . . , K^t_{l_t} each of size at most t. Then x^t_{ij} is feasible for ILP-ultrametric if and only if the
following conditions hold.

Nested cliques: For any s ≤ t every clique K^s_p for some p ∈ [l_s] in G_s is a subclique of some clique
K^t_q in G_t where q ∈ [l_t].

Realization: If |K^t_p| = s for some s ≤ t, then G_s contains K^t_p as a component clique, i.e., K^s_q = K^t_p
for some q ∈ [l_s].
The combinatorial interpretation of the individual layer-t problems allows us to simplify the formulation
of ILP-ultrametric by replacing the constraints for sets of a specific size (Constraint (4)) by a
global constraint about all sets.

Lemma 9. We may replace Constraint (4) of ILP-ultrametric by the following equivalent constraint:
Σ_{j ∈ S} x^t_{ij} ≥ |S| − t, for every t ∈ [n − 1], S ⊆ V and i ∈ S.
4 Rounding an LP relaxation
In this section we consider the following natural LP relaxation for ILP-ultrametric. We keep the
variables x^t_{ij} for every t ∈ [n − 1] and i, j ∈ V but relax the integrality constraint on the variables.

    min   Σ_{t=1}^{n−1}  Σ_{{i,j} ∈ E(K_n)}  κ(i, j) x^t_{ij}                    (LP-ultrametric)

    s.t.  x^t_{ij} ≥ x^{t+1}_{ij}                 ∀ i, j ∈ V,  t ∈ [n − 2]       (8)

          x^t_{ij} + x^t_{jk} ≥ x^t_{ik}          ∀ i, j, k ∈ V,  t ∈ [n − 1]    (9)

          Σ_{j ∈ S} x^t_{ij} ≥ |S| − t            ∀ t ∈ [n − 1],  S ⊆ V,  i ∈ S  (10)

          x^t_{ij} = x^t_{ji},  x^t_{ii} = 0      ∀ i, j ∈ V,  t ∈ [n − 1]       (11)

          0 ≤ x^t_{ij} ≤ 1                        ∀ i, j ∈ V,  t ∈ [n − 1]       (12)
Note that the LP relaxation LP-ultrametric differs from ILP-ultrametric in not having constraint (5). A
feasible solution x^t_{ij} to LP-ultrametric induces a sequence {d_t}_{t ∈ [n−1]} of distance metrics over V
defined as d_t(i, j) := x^t_{ij}. Constraint (10) enforces an additional restriction on this metric: informally,
points in a "large enough" subset S should be spread apart according to the metric d_t. Metrics of
type d_t are called spreading metrics and were first studied by [19, 20] in relation to graph partitioning
problems. The following lemma gives a technical interpretation of spreading metrics (see, e.g.,
[19, 20]).

Lemma 10. Let x^t_{ij} be feasible for LP-ultrametric and for a fixed t ∈ [n − 1], let d_t be the induced
spreading metric. Let i ∈ V be an arbitrary vertex and let S ⊆ V be a set containing i such that
|S| > (1 + ε)t for some ε > 0. Then max_{j ∈ S} d_t(i, j) > ε/(1 + ε).
The following lemma states that we can optimize over LP-ultrametric in polynomial time.

Lemma 11. An optimal solution to LP-ultrametric can be computed in time polynomial in n and
log(max_{i,j} κ(i, j)).
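Although constraint (10) has exponentially many instances, it admits a simple separation oracle,
which is the standard route to a result like Lemma 11 via the ellipsoid method: for fixed i and t, the
hardest set S of each size consists of the vertices closest to i under d_t, so one sort suffices. A sketch
(the function name and data layout are our own):

```python
import numpy as np

def most_violated_spreading_set(i, t, d_t, tol=1e-9):
    """Given a symmetric matrix d_t[i][j] = x^t_{ij}, return a set S containing i
    that violates sum_{j in S} x^t_{ij} >= |S| - t, or None if no such set exists."""
    order = [j for j in np.argsort(d_t[i]) if j != i]  # ascending distance from i
    prefix = 0.0
    for m, j in enumerate(order, start=1):
        prefix += d_t[i][j]         # sum over the m closest vertices to i
        size = m + 1                # |S| counts i itself, and x^t_{ii} = 0
        if prefix < size - t - tol:
            return [i] + order[:m]  # violated set; add it as a cutting plane
    return None
```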
From now on we will simply refer to a feasible solution of LP-ultrametric by the sequence of
spreading metrics {d_t}_{t ∈ [n−1]} it induces. The following definition introduces the notion of an open
ball B_U(i, r, t) of radius r centered at i ∈ V according to the metric d_t and restricted to the set
U ⊆ V.

Definition 12. Let {d_t | t ∈ [n − 1]} be the sequence of spreading metrics feasible for
LP-ultrametric. Let U ⊆ V be an arbitrary subset of V. For a vertex i ∈ U, r ∈ R, and t ∈ [n − 1] we
define the open ball B_U(i, r, t) of radius r centered at i as

    B_U(i, r, t) := {j ∈ U | d_t(i, j) < r} ⊆ U.

If U = V then we denote B_U(i, r, t) simply by B(i, r, t).
To round LP-ultrametric to get a feasible solution for ILP-ultrametric, we will use the technique of
sphere growing which was introduced in [33] to show an O(log n) approximation for the maximum
multicommodity flow problem. The basic idea is to grow a ball around a vertex until the expansion of
this ball is below a certain threshold, chop off this ball and declare it as a partition and recurse on
the remaining vertices. Since then this idea has been used by [25, 19, 14] to design approximation
algorithms for various graph partitioning problems. The first step is to associate to every ball
BU (i, r, t) a volume vol (BU (i, r, t)) and a boundary ?BU (i, r, t) so that its expansion is defined.
For any t ∈ [n − 1] and U ⊆ V we denote by κ_t^U the value of the layer-t objective for solution d_t
restricted to the set U, i.e., κ_t^U := Σ_{i,j ∈ U, i<j} κ(i, j) d_t(i, j). When U = V we refer to κ_t^U simply by
κ_t. Since κ : V × V → R_{≥0}, it follows that κ_t^U ≤ κ_t for any U ⊆ V. We are now ready to define
the volume, boundary and expansion of a ball B_U(i, r, t). We use the definition of [19] modified for
restrictions to arbitrary subsets U ⊆ V.

Definition 13. [19] Let U be an arbitrary subset of V. For a vertex i ∈ U, radius r ∈ R, and
t ∈ [n − 1], let B_U(i, r, t) be the ball of radius r as in Definition 12. Then we define its volume as

    vol(B_U(i, r, t)) := κ_t^U / (n log n)  +  Σ_{j,k ∈ B_U(i,r,t), j<k} κ(j, k) d_t(j, k)
                                            +  Σ_{j ∈ B_U(i,r,t), k ∈ U∖B_U(i,r,t)} κ(j, k) (r − d_t(i, j)).
The boundary of the ball, ∂B_U(i, r, t), is the partial derivative of the volume with respect to the radius,
i.e., ∂B_U(i, r, t) := ∂vol(B_U(i, r, t)) / ∂r. The expansion φ(B_U(i, r, t)) of the ball B_U(i, r, t) is then
defined as the ratio of its boundary to its volume, i.e., φ(B_U(i, r, t)) := ∂B_U(i, r, t) / vol(B_U(i, r, t)).
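Spelled out in code, these quantities are direct sums over pairs. A sketch, with κ and d_t given as
dense matrices indexed by the global vertex ids, and U and the ball given as Python sets (our own
data layout, chosen for brevity):

```python
import math

def volume(i, r, ball, U, kappa, d_t, kappa_t_U, n):
    """vol(B_U(i, r, t)) per Definition 13; `ball` is B_U(i, r, t) as a set and
    `kappa_t_U` is the restricted layer-t objective kappa_t^U."""
    inside = sorted(ball)
    vol = kappa_t_U / (n * math.log(n))            # seed term kappa_t^U / (n log n)
    vol += sum(kappa[j][k] * d_t[j][k]             # edges fully inside the ball
               for a, j in enumerate(inside) for k in inside[a + 1:])
    vol += sum(kappa[j][k] * (r - d_t[i][j])       # partial credit for cut edges
               for j in ball for k in U - ball)
    return vol

def expansion(i, r, ball, U, kappa, d_t, kappa_t_U, n):
    # Between breakpoints of r, d(vol)/dr is the total weight of the cut edges.
    boundary = sum(kappa[j][k] for j in ball for k in U - ball)
    return boundary / volume(i, r, ball, U, kappa, d_t, kappa_t_U, n)
```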
The following theorem establishes that the rounding procedure of Algorithm 1 ensures that the cliques
in C_t are "small" and that the cost of the edges removed to form them is not too high. It also
shows that Algorithm 1 can be implemented to run in time polynomial in n. Let m_ε := ⌊(n − 1)/(1 + ε)⌋
as in Algorithm 1.

Theorem 14. Let {x^t_{ij} | t ∈ [m_ε], i, j ∈ V} be the output of Algorithm 1 on a feasible solution
{d_t}_{t ∈ [n−1]} of LP-ultrametric and any choice of ε ∈ (0, 1). For any t ∈ [m_ε], x^t_{ij} is feasible
for the layer-⌊(1 + ε)t⌋ problem and there is a constant c(ε) > 0 depending only on ε such that
Σ_{{i,j} ∈ E(K_n)} κ(i, j) x^t_{ij} ≤ c(ε)(log n) κ_t. Moreover, Algorithm 1 can be implemented to run in time
polynomial in n.
We are now ready to state the main theorem showing that we can obtain a low-cost non-trivial
ultrametric from Algorithm 1. The proof idea of the main theorem is to use the combinatorial
characterization of Lemma 8 to show that the rounded solution is feasible for ILP-ultrametric, besides
using Theorem 14 for the individual layer-t guarantees.

Theorem 15. Let {x^t_{ij} | t ∈ [m_ε], i, j ∈ V} be the output of Algorithm 1 on an optimal solution
{d_t}_{t ∈ [n−1]} of LP-ultrametric for any choice of ε ∈ (0, 1). Define the sequence y^t_{ij} for every
t ∈ [n − 1] and i, j ∈ V as y^t_{ij} := x^{⌊t/(1+ε)⌋}_{ij} if t > 1 + ε and y^t_{ij} := 1 otherwise. Then y^t_{ij} is feasible
for ILP-ultrametric and satisfies Σ_{t=1}^{n−1} Σ_{{i,j} ∈ E(K_n)} κ(i, j) y^t_{ij} ≤ (2c(ε) log n) OPT, where OPT
is the optimal value of ILP-ultrametric and c(ε) is the constant in the statement of Theorem 14.
Lemma 11 and Theorem 15 imply the following corollary where we put everything together to obtain
a hierarchical clustering of V in time polynomial in n with |V| = n. Let 𝒯 denote the set of all
possible hierarchical clusterings of V.
Algorithm 1: Iterative rounding algorithm to find a low-cost ultrametric

Input: Data set V, spreading metrics {d_t}_{t ∈ [n−1]} on V × V, ε > 0, κ : V × V → R_{≥0}
Output: A solution set of the form {x^t_{ij} ∈ {0, 1} | t ∈ [⌊(n−1)/(1+ε)⌋], i, j ∈ V}

m_ε ← ⌊(n − 1)/(1 + ε)⌋
t ← m_ε
C_{t+1} ← {V}
Δ ← ε/(1 + ε)
while t ≥ 1 do
    C_t ← ∅
    for U ∈ C_{t+1} do
        if |U| ≤ (1 + ε)t then
            C_t ← C_t ∪ {U}
            continue with the next U
        end
        while U ≠ ∅ do
            let i be an arbitrary vertex in U
            let r ∈ (0, Δ] be such that φ(B_U(i, r, t)) ≤ (1/Δ) log( vol(B_U(i, Δ, t)) / vol(B_U(i, 0, t)) )
            C_t ← C_t ∪ {B_U(i, r, t)}
            U ← U ∖ B_U(i, r, t)
        end
    end
    set x^t_{ij} = 1 if i ∈ U_1 ∈ C_t, j ∈ U_2 ∈ C_t with U_1 ≠ U_2, and x^t_{ij} = 0 otherwise
    t ← t − 1
end
return {x^t_{ij} | t ∈ [m_ε], i, j ∈ V}
Corollary 16. Given a data set V of n points and a similarity function κ : V × V → R_{≥0}, there
is an algorithm to compute a hierarchical clustering T of V satisfying
cost(T) ≤ O(log n) · min_{T′ ∈ 𝒯} cost(T′) in time polynomial in n and log(max_{i,j ∈ V} κ(i, j)).
5 Generalized Cost Function
In this section we study the following natural generalization of cost function (1), also introduced
by [16], where the distance between the two points is scaled by a function f : R_{≥0} → R_{≥0}, i.e.,

    cost_f(T) := Σ_{{i,j} ∈ E(K_n)} κ(i, j) f(|leaves(T[lca(i, j)])|).

In order for this cost function to make sense, f should be strictly increasing and satisfy f(0) = 0.
Possible choices for f could be in {x², e^x − 1, log(1 + x)}. The top-down heuristic in [16] finds the
optimal hierarchical clustering up to an approximation factor of c_n log n with c_n being defined as
c_n := 3 α_n max_{1 ≤ n′ ≤ n} f(n′) / f(⌈n′/3⌉), and where α_n is the approximation factor from the
Sparsest Cut algorithm used.
A naive approach to solving this problem using the ideas of Algorithm 1 would be to replace
the objective function of ILP-ultrametric by Σ_{{i,j} ∈ E(K_n)} κ(i, j) f( Σ_{t=1}^{n−1} x^t_{ij} ). This makes the
corresponding analogue of LP-ultrametric non-linear, however, and for a general κ and f it is not
clear how to compute an optimum solution in polynomial time. Using a small trick, one can still
prove that Algorithm 1 returns a good approximation in this case, as the following theorem states. For
more details on the generalized cost function we refer the reader to the supplementary material.
Theorem 17. Let a_n := max_{n′ ∈ [n]} (f(n′) − f(n′ − 1)). Given a data set V of n points and a
similarity function κ : V × V → R_{≥0}, there is an algorithm to compute a hierarchical clustering
T of V satisfying cost_f(T) ≤ O(log n + a_n) · min_{T′ ∈ 𝒯} cost_f(T′) in time polynomial in n,
log(max_{i,j ∈ V} κ(i, j)) and log f(n).

Note that in this case we pay a price of O(log f(n)) in the running time due to binary search.
6 Experiments
Finally, we describe the experiments we performed. We implemented a generalized version of
ILP-ultrametric where one can plug in any strictly increasing function f satisfying f(0) = 0. For the
sake of exposition, we limited ourselves to {x, x², log(1 + x), e^x − 1} for the function f. We used
the dual simplex method and separated constraints (9) and (10) to obtain fast computations. For the
similarity function κ we limited ourselves to using cosine similarity κ_cos and the Gaussian kernel
κ_gauss with σ = 1. Since Algorithm 1 requires κ ≥ 0, in practice we use 1 + κ_cos instead of κ_cos.
Note that both Ward's method and the k-means algorithm work on the squared Euclidean distance
and thus need vector representations of the data set. For the linkage based algorithms we use the
same similarity function that we use for Algorithm 1.
We considered synthetic data sets and some data sets from the UCI database [36]. The synthetic data
sets were mixtures of Gaussians in various small dimensional spaces and for some of the larger data
sets we subsampled a smaller number of points uniformly at random for a number of times depending
on the performance of the MIP and LP solver. For a comparison of the cost of the hierarchy returned
by Algorithm 1 and the optimal hierarchy obtained by solving ILP-ultrametric, see the supplementary
material.
To compare the different hierarchical clustering algorithms, we prune the hierarchy to get the best k
flat clusters and measure its error relative to the ground truth. We use the following notion of error,
also known as Classification Error, that is standard in the literature for hierarchical clustering (see,
e.g., [37]).

Definition 18. Given a proposed clustering h : V → {1, . . . , k} its classification error relative
to a target clustering g : V → {1, . . . , k} is denoted by err(g, h) and is defined as

    err(g, h) := min_{π ∈ S_k} Pr_{x ∼ V} [ h(x) ≠ π(g(x)) ].
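Since the minimum over permutations is a maximum-weight bipartite matching between the two label
sets, err(g, h) can be computed exactly with the Hungarian method. A sketch using scipy, with labels
encoded as integers in {0, . . . , k − 1} (our own convention):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def classification_error(g, h, k):
    """err(g, h) per Definition 18 for integer label arrays g and h."""
    agree = np.zeros((k, k), dtype=int)          # agree[a][b] = #{x : g(x)=a, h(x)=b}
    np.add.at(agree, (g, h), 1)
    rows, cols = linear_sum_assignment(-agree)   # permutation maximizing agreement
    return 1.0 - agree[rows, cols].sum() / len(g)

# Labels identical up to renaming incur zero error:
g = np.array([0, 0, 1, 1, 2, 2]); h = np.array([2, 2, 0, 0, 1, 1])
print(classification_error(g, h, 3))   # 0.0
```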
Figure 1 shows that Algorithm 1 often gives better prunings compared to the other standard clustering
algorithms with respect to this notion of error.
7 Conclusion
In this work we have studied the cost function introduced by [16] for hierarchical clustering of data
under a pairwise similarity function. We have shown a combinatorial characterization of ultrametrics
induced by this cost function leading to an improved approximation algorithm for this problem. It
remains for future work to investigate combinatorial algorithms for this cost function as well as
algorithms for other cost functions of a similar flavor; see supplementary material for a discussion.
[Figure 1 shows two panels. In each, the x-axis indexes the data sets (0 to 50) and the y-axis reports
the error with respect to ground truth (0.0 to 1.0) for Algorithm 1, average linkage, single linkage,
complete linkage, Ward's method, and k-means.]

Figure 1: Comparison of Algorithm 1 with other algorithms for clustering using 1 + κ_cos (left) and
κ_gauss (right)
Acknowledgments
Research reported in this paper was partially supported by NSF CAREER award CMMI-1452463 and
NSF grant CMMI-1333789. The authors thank Kunal Talwar and Mohit Singh for helpful discussions
and anonymous reviewers for helping improve the presentation of this paper.
References
[1] Margareta Ackerman, Shai Ben-David, and David Loker. Characterization of linkage-based clustering. In COLT, pages 270–281. Citeseer, 2010.
[2] Nir Ailon and Moses Charikar. Fitting tree metrics: Hierarchical clustering and phylogeny. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pages 73–82. IEEE, 2005.
[3] Sanjeev Arora, Satish Rao, and Umesh Vazirani. Expander flows, geometric embeddings and graph partitioning. Journal of the ACM (JACM), 56(2):5, 2009.
[5] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. A discriminative framework for clustering via similarity functions. In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 671–680. ACM, 2008.
[7] Yair Bartal. Graph decomposition lemmas and their role in metric embedding methods. In European Symposium on Algorithms, pages 89–97. Springer, 2004.
[12] Moses Charikar and Vaggos Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. arXiv preprint arXiv:1609.09548, 2016.
[14] Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. Clustering with qualitative information. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 524–533. IEEE, 2003.
[16] Sanjoy Dasgupta. A cost function for similarity-based hierarchical clustering. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 118–127. ACM, 2016. ISBN 978-1-4503-4132-5. doi: 10.1145/2897518.2897527. URL http://doi.acm.org/10.1145/2897518.2897527.
[17] Sanjoy Dasgupta and Philip M Long. Performance guarantees for hierarchical clustering. Journal of Computer and System Sciences, 70(4):555–569, 2005.
[18] Marco Di Summa, David Pritchard, and Laura Sanità. Finding the closest ultrametric. Discrete Applied Mathematics, 180:70–80, 2015.
[19] Guy Even, Joseph Naor, Satish Rao, and Baruch Schieber. Fast approximate graph partitioning algorithms. SIAM Journal on Computing, 28(6):2187–2214, 1999.
[20] Guy Even, Joseph Seffi Naor, Satish Rao, and Baruch Schieber. Divide-and-conquer approximation algorithms via spreading metrics. Journal of the ACM (JACM), 47(4):585–616, 2000.
[22] Joseph Felsenstein and Joseph Felenstein. Inferring phylogenies, volume 2. Sinauer Associates, Sunderland, 2004.
[23] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning, volume 1. Springer series in statistics. Springer, Berlin, 2001.
[25] Naveen Garg, Vijay V Vazirani, and Mihalis Yannakakis. Approximate max-flow min-(multi)cut theorems and their applications. SIAM Journal on Computing, 25(2):235–251, 1996.
[32] Robert Krauthgamer, Joseph Seffi Naor, and Roy Schwartz. Partitioning graphs into balanced components. In Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 942–949. Society for Industrial and Applied Mathematics, 2009.
[33] Tom Leighton and Satish Rao. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms. In 29th Annual Symposium on Foundations of Computer Science, pages 422–431. IEEE, 1988.
[36] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[37] Marina Meilă and David Heckerman. An experimental comparison of model-based clustering methods. Machine learning, 42(1-2):9–29, 2001.
5,887 | 6,326 | Fast and accurate spike sorting of high-channel count
probes with KiloSort
Marius Pachitariu1, Nick Steinmetz1, Shabnam Kadir1, Matteo Carandini1 and Kenneth Harris1
1 UCL, UK {ucgtmpa, }@ucl.ac.uk
Abstract
New silicon technology is enabling large-scale electrophysiological recordings in
vivo from hundreds to thousands of channels. Interpreting these recordings requires scalable and accurate automated methods for spike sorting, which should
minimize the time required for manual curation of the results. Here we introduce
KiloSort, a new integrated spike sorting framework that uses template matching
both during spike detection and during spike clustering. KiloSort models the
electrical voltage as a sum of template waveforms triggered on the spike times,
which allows overlapping spikes to be identified and resolved. Unlike previous
algorithms that compress the data with PCA, KiloSort operates on the raw data
which allows it to construct a more accurate model of the waveforms. Processing
times are faster than in previous algorithms thanks to batch-based optimization
on GPUs. We compare KiloSort to an established algorithm and show favorable
performance, at much reduced processing times. A novel post-clustering merging step based on the
continuity of the templates further substantially reduced the number of manual operations required
on this data for the neurons with near-zero error rates, paving the way for fully automated spike
sorting of multichannel electrode recordings.
1 Introduction
The oldest and most reliable method for recording neural activity involves lowering an electrode
into the brain and recording the local electrical activity around the electrode tip. Action potentials
of single neurons can then be observed as a stereotypical temporal deflection of the voltage, called
a spike waveform. When multiple neurons close to the electrode fire action potentials, their spikes
must be identified and assigned to the correct cell, based on the features of the recorded waveforms, a
process known as spike sorting [1, 2, 3, 4, 5, 6, 7]. Spike sorting is substantially helped by the ability
to simultaneously measure the voltage at multiple closely-space sites in the extracellular medium.
In this case, the recorded waveforms can be seen to have characteristic spatial shapes, determined
by each cell?s location and physiological characteristics. Together, the spatial and temporal shape of
the waveform provides all the information that can be used to assign a given spike to a cell.
New high-density electrodes, currently being tested, can record from several hundred closely-spaced
recording sites. Fast algorithms are necessary to quickly and accurately spike sort tens of millions
of spikes coming from 100 to 1,000 cells, from recordings performed with such next-generation
electrodes in awake, behaving animals. Here we present a new algorithm which provides accurate
spike sorting results, with run times that scale near-linearly with the number of recording channels.
The algorithm takes advantage of the computing capabilities of low-cost commercially available
graphics processing units (GPUs) to enable approximately realtime spike sorting from 384-channel
probes.
[Figure 1: panel a shows high-pass filtered, whitened traces over 120 channels and 5000 samples (25 kHz); panel b shows example mean waveforms; panel c shows a 120-by-120 cross-correlation matrix of channel noise.]
Figure 1: Data from high-channel count recordings. a, High-pass filtered and channel-whitened
data. Negative peaks are action potentials. b, Example mean waveforms, centered on their peaks. c,
Example cross-correlation matrix across channels (before whitening).
1.1 High-density electrophysiology and structured sources of noise
Next-generation high-density neural probes allow the spikes of most neurons to be recorded on 5 to
50 channels simultaneously (Fig. 1b). This provides a substantial amount of information per spike,
but because other neurons also fire on the same channels, a clustering algorithm is still required to
demix the signals and assign spikes to the correct cluster. Although the dense spacing of channels
provides a large amount of information for each spike, structured sources of noise can still negatively
impact the spike sorting problem. For example, the superimposed waveforms of neurons distant
from the electrode (non-sortable units) add up and constitute a continuous random background (Fig.
1a) against which the features of sortable spikes (Fig. 1b) must be distinguished. In behaving
animals, another major confound is given by the movement of the electrode relative to the tissue,
which creates an apparent inverse movement of the waveform along the channels of the probe.
1.2 Previous work
A traditional approach to spike sorting divides the problem into several stages. In the first stage,
spikes are detected that have maximum amplitudes above a pre-defined threshold and these spikes
are projected into a common low-dimensional space, typically obtained by PCA. In the second stage,
the spikes are clustered in this low-dimensional space using a variety of approaches, such as mixtures
of Gaussians [8] or peak-density based approaches [9]. Some newer algorithms also include a third
stage of template matching in which overlapping spikes are found in the raw data, that may have
been missed in the first detection phase. Finally, a manual stage in a GUI is required for awake
recordings, to manually perform merge and split operations on the imperfect automated results.
Here instead we combine these steps into a single model with a cost function based on the error of
reconstructing the entire raw voltage dataset with the templates of a set of candidate neurons. We
derive approximate inference and learning algorithms that can be successfully applied to very large
channel count data. This approach is related to a previous study [6],
but whereas the previous work scales is impractically slow for recordings with large numbers of
channels, our further modelling and algorithmic innovations have enabled the approach to be used
quickly and accurately on real datasets. We improve the generative model of [] from a spiking
process with continuous L1-penalized traces, to a model of spikes as discrete temporal events. The
approach of [6] does not scale well to high channel count probes, as it requires the solution of a
generic convex optimization problem in high dimensions.
2 Model formulation
We start with a generative model of the raw electrical voltage. Unlike previous approaches, we do
not pre-commit to the times of the spikes, nor do we project the waveforms of the spikes to a lowerdimensional PCA space. Both of these steps discard potentially useful information, as we show
below.
2.1 Pre-processing: common average referencing, temporal filtering and spatial whitening
To remove low-frequency fluctuations, such as the local field potential, we high-pass filter each
channel of the raw data at 300 Hz. To diminish the effect of artifacts shared across all channels, we
subtract at each timepoint the median of the signal across all recording sites, an operation known as
common average referencing. This step is best performed after high-pass filtering, because the LFP
magnitude is variable across channels but can be comparable in size to the artifacts.
Finally, we whiten the data in space to remove noise that is correlated across channels (Fig. 1c).
The correlated noise is mostly due to far neurons with small spikes [10], which have a large spatial
spread over the surface of the probe. Since there are very many such neurons at all recording
sites, their noise averages out to have normal statistics with a stereotypical cross-correlation pattern
across channels (Fig. 1c). We distinguish the noise covariance from the covariance of the large,
sortable spikes, by removing the times of putative spikes (detected with a threshold criterion) from
the calculation of the covariance matrix. We use a symmetrical whitening matrix that maintains the
spatial structure of the data, known as ZCA, defined as W_ZCA = Σ^{−1/2} = E D^{−1/2} E^T, where E, D
are the singular vectors and singular values of the estimated covariance matrix Σ. To regularize
D, we add a small value to its diagonal. For very large channel counts, estimation of the full
covariance matrix Σ is noisy, and we therefore compute the columns of the whitening matrix W_ZCA
independently for each channel, based on its nearest 32 channels.
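A minimal sketch of the ZCA computation described above, in its global form (the per-channel variant would estimate the covariance from each channel's 32 nearest neighbors). Function names and the regularization constant are our own illustrative choices.

```python
import numpy as np

def zca_whitening_matrix(X, eps=1e-6):
    """X: (channels, samples) high-pass filtered data, with putative spike
    times excluded from the covariance estimate. Returns the symmetric
    ZCA whitening matrix W_ZCA = Sigma^(-1/2)."""
    sigma = np.cov(X)                        # channel covariance Sigma
    d, E = np.linalg.eigh(sigma)             # Sigma = E diag(d) E^T
    return E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T  # eps regularizes D

# usage: whitened = zca_whitening_matrix(filtered_data) @ filtered_data
```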
2.2 Modelling mean spike waveforms with SVD
When single spike waveforms are recorded across a large number of channels, most channels will
have no signal and only noise. To prevent these channels from biasing the spike sorting problem, previous approaches estimate a mask over those channels with sufficient SNR to be included in a given
spike. To further reduce noise and lower the dimensionality of the data for computational reasons,
the spikes are usually projected into a small number of temporal principal components per channel,
typically three. Here we suggest a different method for simultaneous spatial denoising/masking and
for lowering the dimensionality of spikes, which is based on the observation that mean spike waveforms are very well explained by an SVD decomposition of their spatiotemporal waveform, with
as few as three components (Fig. 2ab). However the spatial and temporal components of the SVD
vary substantially from neuron to neuron, hence the same set of temporal basis functions per channel cannot be used to model all neurons (Fig. 2ab), as typically done in standard approaches. We
analyzed the ability of the classical and proposed methods for dimensionality reduction, and found
that the proposed decomposition can reconstruct waveforms with roughly 5 times less residual variance
than the classical approach. This allows it to capture small but very distinguishable features of the
spikes, which ultimately can help distinguish between neurons with very similar waveforms.
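A sketch of the private rank-3 spatiotemporal decomposition of one mean waveform; the function name and the normalization convention (matching the unit-norm templates defined in the next subsection) are ours.

```python
import numpy as np

def low_rank_template(mean_waveform, rank=3):
    """Decompose a (channels x timepoints) mean waveform into rank-3
    private spatial (U_n) and temporal (W_n) components, normalized so
    that ||U_n W_n|| = 1 as required of the templates."""
    U, s, Vt = np.linalg.svd(mean_waveform, full_matrices=False)
    U_n = U[:, :rank] * s[:rank]      # spatial components, scaled
    W_n = Vt[:rank, :]                # temporal components
    scale = np.linalg.norm(U_n @ W_n)
    return U_n / scale, W_n
```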
2.3 Integrated template matching framework
To define a generative model of the electrical recorded voltage, we take advantage of the approximately linear additivity of electrical potentials from different sources in the extracellular medium.
We combine the spike times of all neurons into an N_spikes-dimensional vector s, such that the
waveforms start at time samples s + 1. We define the cluster identity of spike k as σ(k), taking values in
the set {1, 2, 3, ..., N}, where N is the total number of neurons. We define the unit-norm waveform
of neuron n as the matrix K_n = U_n W_n, of size number of channels by number of sample timepoints
t_s (typically 61). The matrix K_n is defined by its low-dimensional decomposition into three pairs
of spatial and temporal basis functions, U_n and W_n, such that the norm of U_n W_n is 1. The value of
[Figure 2: panel a shows raw waveforms with reconstructions from common temporal PCs and private spatiotemporal PCs; panel b is a log-log scatter of residual waveform variance, temporal PC (common) versus spatiotemporal PC (private).]
Figure 2: Spike reconstruction from three private PCs. a, Four example average waveforms
(black) with their respective reconstruction with three common temporal PCs/channel (blue) and
with reconstruction based on three spatiotemporal PCs (red), private to each spike. The red traces
mostly overlap the black traces. b, Summary of residual waveform variance for all neurons in one
dataset.
the electrical voltage at time t on channel i is defined by
V(i, t) = V_0(i, t) + N(0, ε)
V_0(i, t) = Σ_{k : s(k) < t ≤ s(k) + t_s} x_k K_{σ(k)}(i, t − s(k))
x_k ∼ N( μ_{σ(k)}, λ μ_{σ(k)}^2 ),   (1)
where xk > 0 is the amplitude of spike k. Spike amplitudes in the data can vary significantly even
for spikes from the same neuron, due to factors like burst adaptation and drift. We modelled the
mean and variance of the amplitude variability, with the variance of the distribution scaling with the
square of the mean. λ and ε are hyperparameters that control the relative scaling with respect to each
other of the reconstruction error and the prior on the amplitude. In practice we set these constant for
all recordings.
This model formulation leads to the following cost function, which we minimize with respect to the
spike times, cluster assignments, amplitudes and templates
L(s, x, K, σ) = ||V − V_0||^2 + (ε/λ) Σ_k ( x_k / μ_{σ(k)} − 1 )^2   (2)
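A direct, unoptimized evaluation of the cost (2) under the model (1) might look as follows; the data layout and argument names are assumptions for illustration.

```python
import numpy as np

def kilosort_cost(V, spikes, templates, mu, eps=1.0, lam=1.0):
    """Evaluate the cost (2). V: (channels, T) voltage; spikes: iterable of
    (time s_k, cluster n, amplitude x_k); templates: (N, channels, t_s)
    unit-norm waveforms K_n; mu: (N,) mean amplitudes mu_n."""
    V0 = np.zeros_like(V)
    t_s = templates.shape[-1]
    prior = 0.0
    for s_k, n, x_k in spikes:
        V0[:, s_k:s_k + t_s] += x_k * templates[n]   # add spike waveform
        prior += (x_k / mu[n] - 1.0) ** 2            # amplitude prior term
    return np.sum((V - V0) ** 2) + (eps / lam) * prior
```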
3 Learning and inference in the model
To optimize the cost function, we alternate between finding the best spike times s, cluster assignments σ and amplitudes x (template matching) and optimizing the template K parametrization with
respect to s, σ, x (template optimization). We initialize the templates using a simple scaled K-means
clustering model, which we in turn initialize with prototypical spikes determined from the data.
After the final spike times and amplitudes have been extracted, we run a final post-optimization
merging algorithm which finds pairs of clusters whose spikes form a single continuous density.
These steps are separately described in detail below.
3.1 Stacked initializations with scaled K-means and prototypical spikes
The density of spikes can vary substantially across the probe, depending on the location of each
recording site in the brain. Initialization of the optimization in a density-dependent way can thus
assign more clusters to regions that require more, relieving the main optimization from the local-minima-prone problem of moving templates from one part of the probe to another. For the initialization, we thus start by detecting spikes using a threshold rule, and as we load more of the recording
we keep a running subset of prototypical spikes that are sufficiently different from each other by an
L2 norm criterion. We prevent overlapping spikes from being counted as prototypical spikes by enforcing
a minimum spatiotemporal peak isolation criterion on the detected spikes. Out of the prototypical
spikes thus detected, we keep a fixed number N of them, namely those with the most matches to other spikes in the
recording.
We then used this initial set of spikes to initialize a scaled K-means algorithm. This algorithm uses
the same cost function described in equation 2, with spike times s fixed to those found by a threshold
criterion. Unlike standard K-means, each spike is allowed to have variable amplitude [11].
3.2 Learning the templates via stochastic batch optimization
The main optimization re-estimates the spike times s at each iteration. The "online" nature of the
optimization helps to accelerate the algorithm and to avoid local minima. For template optimization
we use a simple running average update rule
A_n^new(i, t_0) ← (1 − 1/p)^{j_n} A_n^old(i, t_0) + ( 1 − (1 − 1/p)^{j_n} ) (1/j_n) Σ_{k ∈ batch : σ(k) = n} V(i, s(k) + t_0),   (3)
where A_n is the running average waveform for cluster n, j_n represents the number of spikes from
cluster n identified in the current batch, and the running average weights past samples exponentially
with a forgetting constant p. Thus A_n approximately represents the average of the past p samples
assigned to cluster n. Note that different clusters will therefore update their mean waveforms at
different rates, depending on their number of spikes per batch. Since firing rates vary over two
orders of magnitude in typical recordings (from < 0.5 to 50 spikes/s), the adaptive running average
procedure allows clusters with rare spikes to nonetheless average enough of their spikes to generate
a smooth average template.
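A sketch of the update rule (3) for one cluster and one batch. We treat p here as the effective averaging length, so that the per-spike decay is 1 − 1/p, consistent with the annealing schedule described next; this interpretation is an assumption.

```python
import numpy as np

def update_running_template(A_old, V, spike_times, p, t_s):
    """Apply eq. (3): exponential running average of one cluster's waveform.
    A_old: (channels, t_s); spike_times: this cluster's spike times s_k in
    the batch; p: effective number of past spikes averaged."""
    j_n = len(spike_times)
    if j_n == 0:
        return A_old                     # no spikes in this batch
    batch_mean = np.mean([V[:, s:s + t_s] for s in spike_times], axis=0)
    w = (1.0 - 1.0 / p) ** j_n           # weight on the old average
    return w * A_old + (1.0 - w) * batch_mean
```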
Like most clustering algorithms, the model we developed here is prone to non-optimal local minima. We used several techniques to ameliorate this problem. First, we annealed several parameters
during learning, to encourage exploration of the parameter space, which stems from the randomness induced by the stochastic batches. We annealed the forgetting constant p from a small value
(typically 20) at the beginning of the optimization to a large value at the end (typically several hundred). We also anneal from small to large the ratio ε/λ, which controls the relative impact of the
reconstruction term and amplitude bias term in equation 2. Therefore, at the beginning of the optimization, spikes assigned to the same cluster are allowed to have more variable amplitudes. Finally,
we anneal the threshold for spike detection (see below), to allow a greater mismatch between spikes
and the available templates at the beginning of the optimization. As optimization progresses, the
templates become more precise, and spikes increase their projections onto their preferred template,
thus allowing higher thresholds to separate them from the noise.
3.3 Inferring spike times and amplitudes via template matching
The inference step of the proposed model attempts to find the best spike times, cluster assignments
and amplitudes, given a set of templates {K_n}_n with low-rank decompositions K_n = U_n W_n and
mean amplitudes μ_n. The templates are obtained from the running average waveform A_n, after an
SVD decomposition to give A_n ≈ μ_n K_n = μ_n U_n W_n, with ||U_n W_n|| = 1, with U_n orthonormal
and Wn orthogonal. The primary roles of the low-rank representation are to guarantee fast inferences
and to regularize the waveform model.
We adopt a parallelized matching pursuit algorithm to iteratively estimate the best fitting templates
and subtract them off from the raw data. In standard matching pursuit, the best fitting template is
identified over the entire batch, its best reconstruction is subtracted from the raw data, and then the
next best fitting template is identified, iteratively until the amount of explained variance falls below a
threshold, which constitutes the stopping criterion. To find the best fitting template, we estimate for
each time t and each template n, the decrease in the cost function obtained by introducing template n
at location t, with the best-fitting amplitude x. This is equivalent to minimizing a standard quadratic
function of the form ax^2 − 2bx + c over the scalar variable x, with a, −2b and c derived as the
coefficients of x^2, x and 1 from equation 2
a = 1 + ε/(λ μ_n^2),   b = (K_n ⋆ V)(t) + ε/(λ μ_n),   c = ε/λ,   (4)
where ⋆ represents the operation of temporal filtering (convolution with the time-reversed filter).
Here the filtering is understood as channel-wise filtering followed by a summation of all filtered
traces, which computes the dot product between the template and the voltage snippet starting at
each timepoint t. The decrease in cost dC(n, t) that would occur if a spike of neuron n were added
at time t, and the best x are given by
dC(n, t) = b^2/a − c,   x_best = b/a.   (5)
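A naive, loop-based evaluation of eqs. (4)-(5) for a single template; the FFT-based filtering and the low-rank speedup of eq. (6) below are omitted here, and all names are illustrative.

```python
import numpy as np

def template_scores(V, K_n, mu_n, eps, lam):
    """Return dC(n, t) and x_best(t) from eqs. (4)-(5) for one unit-norm
    template K_n of shape (channels, t_s), over voltage V (channels, T)."""
    t_s = K_n.shape[1]
    T = V.shape[1]
    # (K_n * V)(t): dot product of the template with the voltage snippet
    # starting at each time t
    corr = np.array([np.sum(K_n * V[:, t:t + t_s])
                     for t in range(T - t_s + 1)])
    a = 1.0 + eps / (lam * mu_n ** 2)
    b = corr + eps / (lam * mu_n)
    c = eps / lam
    return b ** 2 / a - c, b / a        # dC(n, t), x_best(t)
```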
Computing b requires filtering the data V with all the templates K_n, which amounts to a very
large number of operations, particularly when the data has many channels. However, our low-rank
decomposition allows us to reduce the number of operations by a factor of N_chan/N_rank, where
N_chan is the number of channels (typically > 100) and N_rank is the rank of the decomposed template
(typically 3). This follows from the observation that
V ⋆ K_n = V ⋆ (U_n W_n) = Σ_j ( U_n(:, j)^T V ) ⋆ W_n(j, :),   (6)
where Un (:, j) is understood as the j-th column of matrix Un and similarly Wn (j, :) is the j-th row
of W_n. We have thus replaced the matrix convolution V ⋆ K_n with a matrix product U_n^T V and
Nrank one-dimensional convolutions. We implemented the matrix products and filtering operations
efficiently using consumer GPU hardware. Iterative updates of dC after template subtraction can be
obtained quickly using pre-computed cross-template products, as typically done in matching pursuit
[]. The iterative optimization stops when a pre-defined threshold criterion on dC is larger than all
elements of dC.
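The identity (6) can be exercised directly; the sketch below assumes U_n has shape (channels, N_rank) and W_n has shape (N_rank, t_s), and uses SciPy's fftconvolve for the one-dimensional filtering.

```python
import numpy as np
from scipy.signal import fftconvolve

def low_rank_filter(V, U_n, W_n):
    """Compute the filtering V * K_n for K_n = U_n @ W_n via eq. (6):
    one spatial projection plus N_rank one-dimensional convolutions."""
    out = 0.0
    for j in range(U_n.shape[1]):
        proj = U_n[:, j] @ V                    # (T,) spatial projection
        # convolution with the time-reversed filter, i.e. correlation
        out = out + fftconvolve(proj, W_n[j, ::-1], mode='valid')
    return out   # template-voltage dot products at each start time
```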
Due to its greedy nature, matching pursuit can have bad performance at reducing the cost function
in certain problems. It is, however, appropriate to our problem, because spikes are very rare events,
and overlaps are typically small, particularly in high-dimensions over the entire probe. Furthermore,
typical datasets contain millions of spikes and only the simple form of matching pursuit can be efficiently employed. We implemented the simple matching pursuit formulation efficiently on consumer
GPU hardware. Consider the cost improvement matrix dC(n, t). When the largest element of this
matrix is found and the template subtracted, no values of dC need to change except those very close
in time to the fitted template (t_s samples away). Thus, instead of finding the global maximum of dC,
we can find local maxima above the threshold criterion, and impose a minimal distance (t_s) between
such local maxima. The identified spikes can then be processed in parallel without affecting each
other's representations.
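A sequential sketch of this detection step (a real implementation evaluates all candidate maxima in parallel on the GPU); the scan order and tie-breaking are our own simplifications.

```python
import numpy as np

def detect_parallel_spikes(dC, threshold, t_s):
    """Find local maxima of the cost improvement dC(n, t) above threshold,
    at least t_s samples apart; each can then be fit independently."""
    best = dC.max(axis=0)       # best improvement at each time
    which = dC.argmax(axis=0)   # template achieving it
    spikes, t = [], 1
    while t < len(best) - 1:
        if (best[t] > threshold
                and best[t] >= best[t - 1] and best[t] >= best[t + 1]):
            spikes.append((which[t], t))
            t += t_s            # impose the minimal distance t_s
        else:
            t += 1
    return spikes
```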
We found it unnecessary to iterate the (relatively expensive) parallel matching pursuit algorithm
during the optimization of the templates. We obtained similar templates when we aborted the parallel matching pursuit after the first parallel detection step, without detecting any further overlapping
spikes. To improve the efficiency of the optimization we therefore only apply the full parallel template matching algorithm on the final pass, thus obtaining the overlapping spikes.
4 Benchmarks
First, we timed the algorithm on several large-scale datasets. The average run times for 32, 128
and 384 channel recordings were 10, 29 and 140 minutes respectively, on a single GPU-equipped
workstation. These were significant improvements over an established framework called KlustaKwik [8], which needed approximately 480 and 10-20 thousand minutes when run on 32 and 128
channel datasets on a standard CPU cluster (we did not attempt to run KlustaKwik on 384 channel
recordings).
The significant improvements in speed could have come at the expense of accuracy. We
compared KiloSort and KlustaKwik on 32 and 128 channel recordings, using a technique known as
"hybrid ground truth" [8]. To create this data, we first selected all the clusters from a recording
that had been previously analysed with KlustaKwik, and curated by a human expert. For each
[Figure 3: panels a-g show, for KlustaKwik and KiloSort, distributions over sorted GT neurons of false positive rates, miss rates, and total score, before (a-c) and after (d-f) the best possible merges, and the number of merges required for the best score (g).]
Figure 3: Hybrid ground truth performance of proposed (KiloSort) versus established (KlustaKwik) algorithm. a, Distribution of false positive rates. b, Distribution of misses. c, Total score.
def, Same as (abc) after greedy best possible merges. g, Number of merges required to reach best
score.
cluster, we extracted its raw waveform and denoised it with an SVD decomposition (keeping the
top 7 dimensions of variability). We then added the de-noised waveforms at a different but nearby
spatial location on the probe with a constant channel shift, randomly chosen for each neuron. To
avoid increasing the spike density at any location on the probe, we also subtracted off the denoised
waveform from its original location.
Finally, we ran both KiloSort and KlustaKik on 16 instantiations of the hybrid ground truth. We
matched ground truth cells with clusters identified by the algorithms to find the maximizer of the
score = 1?false positive rate?miss rate, where the false positive rate was normalized by the number
of spikes in the test cluster, and the miss rate was normalized by the number of spikes in the ground
truth cluster. Values close to 1 indicate well-sorted units. Both KiloSort and KlustaKwik performed
well, with KiloSort producing significantly more cells with well-isolated clusters (53% vs 35% units
with scores above 0.9).
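For concreteness, the score can be computed from matched spike trains as sketched below; the matching tolerance and the greedy two-pointer matching are our own simplifications, not the paper's exact procedure.

```python
def sorting_score(gt_spikes, test_spikes, tol=1):
    """score = 1 - false positive rate - miss rate, with false positives
    normalized by the test cluster size and misses by the ground-truth
    cluster size. Spike times are matched within +/- tol samples."""
    gt, test = sorted(gt_spikes), sorted(test_spikes)
    hits, i, j = 0, 0, 0
    while i < len(gt) and j < len(test):
        if abs(gt[i] - test[j]) <= tol:
            hits += 1; i += 1; j += 1
        elif gt[i] < test[j]:
            i += 1
        else:
            j += 1
    fp_rate = 1.0 - hits / len(test)
    miss_rate = 1.0 - hits / len(gt)
    return 1.0 - fp_rate - miss_rate
```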
We also estimated the best achievable score following manual sorting of the automated results. To
minimize human operator work, algorithms are typically biased towards producing more clusters
than can be expected in the recording, because manually merging an over-split cluster is easier,
less time-consuming, and less error-prone than splitting an over-merged cluster (the latter requires
choosing a carefully defined separation surface). Both KiloSort and KlustaKwik had such a bias,
producing between two and four times more clusters than the expected number of neurons.
To estimate the best achievable score after operator merges, we took advantage of the ground truth
data, and automatically merged together candidate clusters so as to greedily maximize their score.
Final best results as well as the required number of matches are shown in Figure 3defg (KiloSort
vs KlustaKwik 69% vs 60% units with scores above 0.9). The relative performance improvement
of KiloSort is clearly driven by fewer misses (Fig 3e), which are likely due to its ability to detect
overlapping spikes.
5 Extension: post-hoc template merging
We found that we can further reduce human operator work by performing most of the merges in
an automated way. The most common oversplit clusters show remarkable continuity of their spike
densities (Fig. 4). In other words, no discrimination boundary can be identified such that the
oversplit cluster appears bimodal in the direction orthogonal to it. Instead, these clusters arise as a consequence of the algorithm
partitioning clusters with large variance into multiple templates, so as to better explain their total
variance. In KiloSort, we can exploit the fact that the decision boundaries between any two clusters
[Figure 4: eight panels (a-h); see caption below.]
Figure 4: PC and feature-space projections of two pairs of clusters that should be merged. ae,
Mean waveforms of merge candidates. bf, Spike projections into the top PCs of each candidate
cluster. cg, Template feature projections for the templates corresponding to the candidate clusters.
dh, Discriminant of the feature projections from (cg) (see main text for exact formula).
are in fact planes (which we show below). If two clusters belong to the same neuron, their one-dimensional projections in the space orthogonal to the decision boundary will show a continuous
distribution (Fig. 4cd and 4gh), and the clusters can be merged. We use this idea to sequentially
merge any two clusters with continuous distributions in their 2D feature spaces. Note that the best
principal components for each cluster's main channel are much less indicative of a potential merge
(Fig 4b and 4f).
To see why the decision boundaries in KiloSort are linear, consider two templates Ki and Kj and
consider that we have arrived at the instance of template matching where a spike k needs to be
assigned to one of these two templates. Their respective cost function improvements are
dC(i, t) = b_i^2/a_i and dC(j, t) = b_j^2/a_j, using the convention from equation 4. The decision of assigning spike k to
one or the other of these templates is then equivalent to determining the sign of dC(i, t) ? dC(j, t),
which is a linear discriminant of the feature projections
sign( dC(i, t) − dC(j, t) ) = sign( b_i a_i^{−1/2} − b_j a_j^{−1/2} )   (7)
where a_i and a_j do not depend on the data and b_i, b_j are linear functions of the raw voltage, hence the
decision boundary between any two templates is linear (Fig. 4).
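A sketch of the merge test implied by eq. (7): project each spike's template features onto the direction orthogonal to the planar boundary and inspect the resulting one-dimensional density. The names and the unimodality criterion below are illustrative.

```python
import numpy as np

def merge_discriminant(b_i, b_j, a_i, a_j):
    """1-D projection orthogonal to the linear decision boundary between
    templates i and j, eq. (7). b_i, b_j: per-spike feature projections
    for the two candidate clusters; a_i, a_j: the scalars from eq. (4)."""
    return b_i / np.sqrt(a_i) - b_j / np.sqrt(a_j)

# A merge is accepted when the histogram of this discriminant over both
# clusters' spikes is continuous, e.g. shows no dip near zero.
```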
6 Discussion
We have demonstrated here a new framework for spike sorting of high-channel count electrophysiology data, which offers substantial accuracy and speed improvements over previous frameworks,
while also reducing the amount of manual work required to isolate single units. KiloSort is currently enabling spike sorting of up to 1,000 neurons recorded simultaneously in awake animals and
will help to enable the next generation of large-scale neuroscience. The code is available online at
https://github.com/cortex-lab/KiloSort.
References
[1] Rodrigo Quian Quiroga. Spike sorting. Current Biology, 22(2):R45–R46, 2012.
[2] Gaute T Einevoll, Felix Franke, Espen Hagen, Christophe Pouzat, and Kenneth D Harris. Towards reliable spike-train recordings from thousands of neurons with multielectrodes. Current opinion in neurobiology, 22(1):11–17, 2012.
[3] Daniel N Hill, Samar B Mehta, and David Kleinfeld. Quality metrics to accompany spike sorting of extracellular signals. The Journal of Neuroscience, 31(24):8699–8705, 2011.
[4] Kenneth D Harris, Darrell A Henze, Jozsef Csicsvari, Hajime Hirase, and György Buzsáki. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. Journal of neurophysiology, 84(1):401–414, 2000.
[5] Jonathan W Pillow, Jonathon Shlens, EJ Chichilnisky, and Eero P Simoncelli. A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings. PloS one, 8(5):e62123, 2013.
[6] Chaitanya Ekanadham, Daniel Tranchina, and Eero P Simoncelli. A unified framework and method for automatic neural spike identification. Journal of neuroscience methods, 222:47–55, 2014.
[7] Felix Franke, Robert Pröpper, Henrik Alle, Philipp Meier, Jörg RP Geiger, Klaus Obermayer, and Matthias HJ Munk. Spike sorting of synchronous spikes from local neuron ensembles. Journal of neurophysiology, 114(4):2535–2549, 2015.
[8] C Rossant, SN Kadir, DFM Goodman, J Schulman, MLD Hunter, AB Saleem, A Grosmark, M Belluscio, GH Denfield, AS Ecker, AS Tolias, S Solomon, G Buzsaki, M Carandini, and KD Harris. Spike sorting for large, dense electrode arrays. Nature Neuroscience, 19:634–641, 2016.
[9] Alex Rodriguez and Alessandro Laio. Clustering by fast search and find of density peaks. Science, 344(6191):1492–1496, 2014.
[10] Joana P Neto, Gonçalo Lopes, João Frazão, Joana Nogueira, Pedro Lacerda, Pedro Baião, Arno Aarts, Alexandru Andrei, Silke Musa, Elvira Fortunato, et al. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset. bioRxiv, page 037937, 2016.
[11] Adam Coates, Andrew Y Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In International conference on artificial intelligence and statistics, pages 215–223, 2011.
5,888 | 6,327 | Full-Capacity Unitary Recurrent Neural Networks
Scott Wisdom1*, Thomas Powers1*, John R. Hershey2, Jonathan Le Roux2, and Les Atlas1
1 Department of Electrical Engineering, University of Washington
{swisdom, tcpowers, atlas}@uw.edu
2 Mitsubishi Electric Research Laboratories (MERL)
{hershey, leroux}@merl.com
Abstract
Recurrent neural networks are powerful models for processing sequential data,
but they are generally plagued by vanishing and exploding gradient problems.
Unitary recurrent neural networks (uRNNs), which use unitary recurrence matrices, have recently been proposed as a means to avoid these issues. However, in
previous experiments, the recurrence matrices were restricted to be a product of
parameterized unitary matrices, and an open question remains: when does such a
parameterization fail to represent all unitary matrices, and how does this restricted
representational capacity limit what can be learned? To address this question,
we propose full-capacity uRNNs that optimize their recurrence matrix over all
unitary matrices, leading to significantly improved performance over uRNNs that
use a restricted-capacity recurrence matrix. Our contribution consists of two main
components. First, we provide a theoretical argument to determine if a unitary
parameterization has restricted capacity. Using this argument, we show that a
recently proposed unitary parameterization has restricted capacity for hidden state
dimension greater than 7. Second, we show how a complete, full-capacity unitary
recurrence matrix can be optimized over the differentiable manifold of unitary
matrices. The resulting multiplicative gradient step is very simple and does not
require gradient clipping or learning rate adaptation. We confirm the utility of our
claims by empirically evaluating our new full-capacity uRNNs on both synthetic
and natural data, achieving superior performance compared to both LSTMs and
the original restricted-capacity uRNNs.
1 Introduction
Deep feed-forward and recurrent neural networks have been shown to be remarkably effective in a
wide variety of problems. A primary difficulty in training using gradient-based methods has been
the so-called vanishing or exploding gradient problem, in which the instability of the gradients over
multiple layers can impede learning [1, 2]. This problem is particularly keen for recurrent networks,
since the repeated use of the recurrent weight matrix can magnify any instability.
This problem has been addressed in the past by various means, including gradient clipping [3],
using orthogonal matrices for initialization of the recurrence matrix [4, 5], or by using pioneering
architectures such as long short-term memory (LSTM) recurrent networks [6] or gated recurrent
units [7]. Recently, several innovative architectures have been introduced to improve information
flow in a network: residual networks, which directly pass information from previous layers up in
a feed-forward network [8], and attention networks, which allow a recurrent network to access
past activations [9]. The idea of using a unitary recurrent weight matrix was introduced so that the
gradients are inherently stable and do not vanish or explode [10]. The resulting unitary recurrent
* Equal contribution
neural network (uRNN) is complex-valued and uses a complex form of the rectified linear activation
function. However, this idea was investigated using, as we show, a potentially restricted form of
unitary matrices.
The two main components of our contribution can be summarized as follows:
1) We provide a theoretical argument to determine the smallest dimension N for which any parameterization of the unitary recurrence matrix does not cover the entire set of all unitary matrices. The
argument relies on counting real-valued parameters and using Sard's theorem to show that the smooth
map from these parameters to the unitary manifold is not onto. Thus, we can show that a previously
proposed parameterization [10] cannot represent all unitary matrices larger than 7 × 7. Thus, such a
parameterization results in what we refer to as a restricted-capacity unitary recurrence matrix.
2) To overcome the limitations of restricted-capacity parameterizations, we propose a new method for
stochastic gradient descent for training the unitary recurrence matrix, which constrains the gradient to
lie on the differentiable manifold of unitary matrices. This approach allows us to directly optimize a
complete, or full-capacity, unitary matrix. Neither restricted-capacity nor full-capacity unitary matrix
optimization require gradient clipping. Furthermore, full-capacity optimization still achieves good
results without adaptation of the learning rate during training.
To test the limitations of a restricted-capacity representation and to confirm that our full-capacity
uRNN does have practical implications, we test restricted-capacity and full-capacity uRNNs on
both synthetic and natural data tasks. These tasks include synthetic system identification, long-term
memorization, frame-to-frame prediction of speech spectra, and pixel-by-pixel classification of
handwritten digits. Our proposed full-capacity uRNNs generally achieve equivalent or superior
performance on synthetic and natural data compared to both LSTMs [6] and the original restricted-capacity uRNNs [10].
In the next section, we give an overview of unitary recurrent neural networks. Section 3 presents
our first contribution: the theoretical argument to determine if any unitary parameterization has
restricted-capacity. Section 4 describes our second contribution, where we show how to optimize a
full-capacity unitary matrix. We confirm our results with simulated and natural data in Section 5 and
present our conclusions in Section 6.
2 Unitary recurrent neural networks
The uRNN proposed by Arjovsky et al. [10] consists of the following nonlinear dynamical system
that has real- or complex-valued inputs xt of dimension M , complex-valued hidden states ht of
dimension N , and real- or complex-valued outputs yt of dimension L:
h_t = σ_b(W h_{t−1} + V x_t)
y_t = U h_t + c,   (1)
where y_t = Re{U h_t + c} if the outputs y_t are real-valued. The element-wise nonlinearity σ_b is
[σ_b(z)]_i = (|z_i| + b_i) · z_i/|z_i|  if |z_i| + b_i > 0,  and 0 otherwise.   (2)
Note that this non-linearity consists in a soft-thresholding of the magnitude using the bias vector
b. Hard-thresholding would set the output of ? to zi if |zi | + bi > 0. The parameters of the uRNN
are as follows: W ? U (N ), unitary hidden state transition matrix; V ? CN ?M , input-to-hidden
transformation; b ? RN , nonlinearity bias; U ? CL?N , hidden-to-output transformation; and
c ? CL , output bias.
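A NumPy sketch of one step of the dynamics (1) with the nonlinearity (2); the small constant guarding the division at z = 0 is our own numerical detail.

```python
import numpy as np

def modrelu(z, b):
    """Eq. (2): soft-threshold the modulus of z, keep its phase."""
    mag = np.abs(z)
    return np.where(mag + b > 0, (mag + b) * z / (mag + 1e-8), 0.0)

def urnn_step(h_prev, x_t, W, V, b, U, c):
    """One step of the uRNN dynamics, eq. (1)."""
    h_t = modrelu(W @ h_prev + V @ x_t, b)
    y_t = U @ h_t + c          # take y_t.real for real-valued outputs
    return h_t, y_t
```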
Arjovsky et al. [10] propose the following parameterization of the unitary matrix W:
W_u(θ_u) = D_3 R_2 F^{−1} D_2 P R_1 F D_1,   (3)
where D are diagonal unitary matrices, R are Householder reflection matrices [11], F is a discrete
Fourier transform (DFT) matrix, and P is a permutation matrix. The resulting matrix Wu is unitary
because all its component matrices are unitary. This decomposition is efficient because diagonal,
reflection, and permutation matrices are O(N) to compute, and DFTs can be computed efficiently in
O(N log N) time using the fast Fourier transform (FFT). The parameter vector θ_u consists of 7N
real-valued parameters: N parameters for each of the 3 diagonal matrices, where D_{i,i} = e^{jθ_i}, and 2N
parameters for each of the 2 Householder reflection matrices, which are the real and imaginary values of
the complex reflection vectors u_i: R_i = I − 2 u_i u_i^H / ⟨u_i, u_i⟩.
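The parameterization (3) can be composed directly; the sketch below builds dense matrices for clarity, whereas an efficient implementation applies each factor in O(N) or O(N log N) as noted above. The argument layout is assumed.

```python
import numpy as np

def restricted_unitary(theta, u1, u2, perm):
    """Compose W_u of eq. (3) from its 7N parameters: theta (3N phases),
    u1, u2 (complex reflection vectors), perm (a fixed permutation)."""
    N = len(perm)
    D = [np.diag(np.exp(1j * th)) for th in np.split(theta, 3)]
    def householder(u):
        return np.eye(N) - 2.0 * np.outer(u, u.conj()) / np.vdot(u, u)
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix
    P = np.eye(N)[perm]                      # permutation matrix
    R1, R2 = householder(u1), householder(u2)
    return D[2] @ R2 @ F.conj().T @ D[1] @ P @ R1 @ F @ D[0]
```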
3 Estimating the representation capacity of structured unitary matrices
In this section, we state and prove a theorem that can be used to determine when any particular unitary
parameterization does not have capacity to represent all unitary matrices. As an application of this
theorem, we show that the parameterization (3) does not have the capacity to cover all N × N unitary
matrices for N > 7. First, we establish an upper bound on the number of real-valued parameters
required to represent any N × N unitary matrix. Then, we state and prove our theorem.
Lemma 3.1. The set of all unitary matrices is a manifold of dimension N^2.
Proof: The set of all unitary matrices is the well-known unitary Lie group U(N) [12, §3.4]. A Lie
group identifies group elements with points on a differentiable manifold [12, §2.2]. The dimension
of the manifold is equal to the dimension of the Lie algebra u, which is a vector space that is the
tangent space at the identity element [12, §4.5]. For U(N), the Lie algebra consists of all skew-Hermitian
matrices A [12, §5.4]. A skew-Hermitian matrix is any A ∈ C^{N×N} such that A = −A^H,
where (·)^H is the conjugate transpose. To determine the dimension of U(N), we can determine the
dimension of u. Because of the skew-Hermitian constraint, the diagonal elements of A are purely
imaginary, which corresponds to N real-valued parameters. Also, since A_{i,j} = −A*_{j,i}, the upper and
lower triangular parts of A are parameterized by N(N−1)/2 complex numbers, which corresponds to
an additional N^2 − N real parameters. Thus, U(N) is a manifold of dimension N^2.
Theorem 3.2. If a family of N × N unitary matrices is parameterized by P real-valued parameters for P < N², then it cannot contain all N × N unitary matrices.

Proof: We consider a family of unitary matrices that is parameterized by P real-valued parameters through a smooth map g : P(P) → U(N²) from the space of parameters P(P) to the space of all unitary matrices U(N²). The space P(P) of parameters is considered as a P-dimensional manifold, while the space U(N²) of all unitary matrices is an N²-dimensional manifold according to Lemma 3.1. Then, if P < N², Sard's theorem [13] implies that the image g(P) of g is of measure zero in U(N²), and in particular g is not onto. Since g is not onto, there must exist a unitary matrix W ∈ U(N²) for which there is no corresponding input P ∈ P(P) such that W = g(P). Thus, if P is such that P < N², the manifold P(P) cannot represent all unitary matrices in U(N²).
We now apply Theorem 3.2 to the parameterization (3). Note that the parameterization (3) has P = 7N real-valued parameters. If we solve for N in 7N < N², we get N > 7. Thus, the parameterization (3) cannot represent all unitary matrices for dimension N > 7.
4 Optimizing full-capacity unitary matrices on the Stiefel manifold
In this section, we show how to get around the limitations of restricted-capacity parameterizations and directly optimize a full-capacity unitary matrix. We consider the Stiefel manifold of all N × N complex-valued matrices whose columns are N orthonormal vectors in C^N [14]. Mathematically, the Stiefel manifold is defined as

    V_N(C^N) = { W ∈ C^{N×N} : W^H W = I_{N×N} }.    (4)

For any W ∈ V_N(C^N), any matrix Z in the tangent space T_W V_N(C^N) of the Stiefel manifold satisfies Z^H W − W^H Z = 0 [14]. The Stiefel manifold becomes a Riemannian manifold when its tangent space is equipped with an inner product. Tagare [14] suggests using the canonical inner product, given by

    ⟨Z_1, Z_2⟩_c = tr( Z_1^H (I − ½ W W^H) Z_2 ).    (5)
Under this canonical inner product on the tangent space, the gradient in the Stiefel manifold of the loss function f with respect to the matrix W is AW, where A = G^H W − W^H G is a skew-Hermitian matrix and G, with G_{i,j} = ∂f/∂W_{i,j}, is the usual gradient of the loss function f with respect to the matrix W [14]. Using these facts, Tagare [14] suggests a descent curve along the Stiefel manifold at training iteration k given by the matrix product of the Cayley transformation of A^{(k)} with the current solution W^{(k)}:

    Y^{(k)}(λ) = ( I + (λ/2) A^{(k)} )^{−1} ( I − (λ/2) A^{(k)} ) W^{(k)},    (6)

where λ is a learning rate and A^{(k)} = G^{(k)H} W^{(k)} − W^{(k)H} G^{(k)}. Gradient descent proceeds by performing updates W^{(k+1)} = Y^{(k)}(λ). Tagare [14] suggests an Armijo-Wolfe search along the curve to adapt λ, but such a procedure would be expensive for neural network optimization since it requires multiple evaluations of the forward model and gradients. We found that simply using a fixed learning rate λ often works well. Also, RMSprop-style scaling of the gradient G^{(k)} by a running average of the previous gradients' norms [15] before applying the multiplicative step (6) can improve convergence. The only additional substantial computation required beyond the forward and backward passes of the network is the N × N matrix inverse in (6).
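A minimal NumPy sketch of this update step, assuming the Euclidean gradient G has already been computed by backpropagation (the function name and signature are our own choices):

    import numpy as np

    def stiefel_update(W, G, lr):
        # One full-capacity update along the Stiefel manifold (Eq. 6).
        # W: current N x N unitary matrix; G: Euclidean gradient dL/dW.
        N = W.shape[0]
        A = G.conj().T @ W - W.conj().T @ G   # skew-Hermitian A = G^H W - W^H G
        I = np.eye(N, dtype=W.dtype)
        # Cayley transformation (I + (lr/2) A)^{-1} (I - (lr/2) A)
        cayley = np.linalg.solve(I + (lr / 2.0) * A, I - (lr / 2.0) * A)
        return cayley @ W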
5 Experiments
All models are implemented in Theano [16], based on the implementation of restricted-capacity
uRNNs by [10], available from https://github.com/amarshah/complex_RNN. All code to
replicate our results is available from https://github.com/stwisdom/urnn. All models use
RMSprop [15] for optimization, except that full-capacity uRNNs optimize their recurrence matrices
with a fixed learning rate using the update step (6) and optional RMSprop-style gradient normalization.
5.1 Synthetic data
First, we compare the performance of full-capacity uRNNs to restricted-capacity uRNNs and LSTMs
on two tasks with synthetic data. The first task is synthetic system identification, where a uRNN must
learn the dynamics of a target uRNN given only samples of the target uRNN?s inputs and outputs.
The second task is the copy memory problem, in which the network must recall a sequence of data
after a long period of time.
5.1.1 System identification
For the task of system identification, we consider the problem of learning the dynamics of a nonlinear dynamical system that has the form (1), given a dataset of inputs and outputs of the system. We will draw a true system W_sys randomly from either a constrained set Wu of restricted-capacity unitary matrices using the parameterization W_u(θ_u) in (3) or from a wider set Wg of unitary matrices that are guaranteed to lie outside Wu. We sample from Wg by taking a matrix product of two unitary matrices drawn from Wu.
We use a sequence length of T = 150, and we set the input dimension M and output dimension L both equal to the hidden state dimension N. The input-to-hidden transformation V and hidden-to-output transformation U are both set to identity, the output bias c is set to 0, the initial state is set to 0, and the hidden bias b is drawn from a uniform distribution in the range [−0.11, −0.09]. The hidden bias has a mean of −0.1 to ensure stability of the system outputs. Inputs are generated by sampling T-length i.i.d. sequences of zero-mean, diagonal and unit covariance circular complex-valued Gaussians of dimension N. The outputs are created by running the system (1) forward on the inputs.
We compare a restricted-capacity uRNN using the parameterization from (3) and a full-capacity
uRNN using Stiefel manifold optimization with no gradient normalization as described in Section 4.
We choose hidden state dimensions N to test critical points predicted by our arguments in Section 3 for W_u(θ_u) in (3): N ∈ {4, 6, 7, 8, 16}. These dimensions are chosen to test below, at, and above the critical dimension of 7.
For all experiments, the number of training, validation, and test sequences are 20000, 1000, and
1000, respectively. Mean-squared error (MSE) is used as the loss function. The learning rate is 0.001
with a batch size of 50 for all experiments. Both models use the same matrix drawn from Wu as
initialization. To isolate the effect of unitary recurrence matrix capacity, we only optimize W, setting
all other parameters to true oracle values. For each method, we report the best test loss over 100
epochs and over 6 random initializations for the optimization.
The results are shown in Table 1. 'Wsys init.' refers to the initialization of the true system unitary matrix W_sys, which is sampled from either the restricted-capacity set Wu or the wider set Wg.
Table 1: Results for system identification in terms of best normalized MSE. Wu is the set of restricted-capacity unitary matrices from (3), and Wg is a wider set of unitary matrices.

    Wsys init.  Capacity    N=4      N=6      N=7      N=8      N=16
    Wu          Restricted  4.81e-1  6.75e-3  3.53e-1  3.51e-1  7.30e-1
    Wu          Full        1.28e-1  3.03e-1  2.16e-1  5.04e-2  1.28e-1
    Wg          Restricted  3.21e-4  3.36e-1  3.36e-1  2.69e-1  7.60e-1
    Wg          Full        8.72e-2  3.86e-1  2.62e-1  7.22e-2  1.00e-6
Notice that for N < 7, the restricted-capacity uRNN achieves comparable or better performance than
the full-capacity uRNN. At N = 7, the restricted-capacity and full-capacity uRNNs achieve relatively
comparable performance, with the full-capacity uRNN achieving slightly lower error. For N > 7, the
full-capacity uRNN always achieves better performance versus the restricted-capacity uRNN. This
result confirms our theoretical arguments that the restricted-capacity parameterization in (3) lacks the
capacity to model all matrices in the unitary group for N > 7 and indicates the advantage of using a
full-capacity unitary recurrence matrix.
5.1.2 Copy memory problem
The experimental setup follows the copy memory problem from [10], which itself was based on
the experiment from [6]. We consider alternative hidden state dimensions and extend the sequence
lengths to T = 1000 and T = 2000, which are longer than the maximum length of T = 750
considered in previous literature.
In this task, the data is a vector of length T + 20 and consists of elements from 10 categories. The vector begins with a sequence of 10 symbols sampled uniformly from categories 1 to 8. The next T − 1 elements of the vector are the ninth 'blank' category, followed by an element from the tenth category, the 'delimiter'. The remaining ten elements are 'blank'. The task is to output T + 10 blank characters followed by the sequence from the beginning of the vector. We use average cross entropy as the training loss function. The baseline solution outputs the blank category for T + 10 time steps and then guesses a random symbol uniformly from the first eight categories. This baseline has an expected average cross entropy of 10 log(8) / (T + 20).
Figure 1: Results of the copy memory problem with sequence lengths of 1000 (left) and 2000 (right). The full-capacity uRNN converges quickly to a perfect solution, while the LSTM and restricted-capacity uRNN with approximately the same number of parameters are unable to improve past the baseline naive solution.
The full-capacity uRNN uses a hidden state size of N = 128 with no gradient normalization. To match the number of parameters (≈ 22k), we use N = 470 for the restricted-capacity uRNN, and N = 68 for the LSTM. The training set size is 100000 and the test set size is 10000. The results
of the T = 1000 experiment can be found on the left half of Figure 1. The full-capacity uRNN
converges to a solution with zero average cross entropy after about 2000 training iterations, whereas
the restricted-capacity uRNN settles to the baseline solution of 0.020. The results of the T = 2000
experiment can be found on the right half of Figure 1. The full-capacity uRNN hovers around the
baseline solution for about 5000 training iterations, after which it drops down to zero average cross
entropy. The restricted-capacity again settles down to the baseline solution of 0.010. These results
demonstrate that the full-capacity uRNN is very effective for problems requiring very long memory.
5.2 Speech data
We now apply restricted-capacity and full-capacity uRNNs to real-world speech data and compare their performance to LSTMs. The main task we consider is predicting the log-magnitude of future frames of a short-time Fourier transform (STFT). The STFT is a commonly used feature domain for speech enhancement, and is defined as the Fourier transform of short windowed frames of the time series. In the STFT domain, a real-valued audio signal is represented as a complex-valued F × T matrix composed of T frames that are each composed of F = N_win/2 + 1 frequency bins, where N_win is the duration of the time-domain frame. Most speech processing algorithms use the log-magnitude of the complex STFT values and reconstruct the processed audio signal using the phase of the original observations.
The frame prediction task is as follows: given all the log-magnitudes of STFT frames up to time t, predict the log-magnitude of the STFT frame at time t + 1. We use the TIMIT dataset [17]. According to common practice [18], we use a training set with 3690 utterances from 462 speakers, a validation set of 400 utterances, and an evaluation set of 192 utterances. Training, validation, and evaluation sets have distinct speakers. Results are reported on the evaluation set using the network parameters that perform best on the validation set in terms of the loss function, over three training trials. All TIMIT audio is resampled to 8 kHz. The STFT uses a Hann analysis window of 256 samples (32 milliseconds) and a window hop of 128 samples (16 milliseconds).
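A minimal NumPy sketch of the feature extraction for this task, under the stated window and hop sizes (the framing helper and the eps floor are our own assumptions):

    import numpy as np

    def log_magnitude_frames(audio, n_win=256, hop=128, eps=1e-8):
        # Log-magnitude STFT features: Hann window (256 samples = 32 ms at
        # 8 kHz), hop of 128 samples, F = n_win // 2 + 1 frequency bins.
        window = np.hanning(n_win)
        n_frames = 1 + (len(audio) - n_win) // hop
        frames = np.stack([audio[t * hop : t * hop + n_win] * window
                           for t in range(n_frames)])
        spectrum = np.fft.rfft(frames, axis=1)
        return np.log(np.abs(spectrum) + eps)   # shape (T, F)

    # Inputs are frames 0..T-2 and targets frames 1..T-1 (predict t+1 from <= t).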
The LSTM requires gradient clipping during optimization, while the restricted-capacity and full-capacity uRNNs do not. The hidden state dimensions N of the LSTM are chosen to match the number of parameters of the full-capacity uRNN. For the restricted-capacity uRNN, we run models that match either N or the number of parameters. For the LSTM and restricted-capacity uRNNs, we
use RMSprop [15] with a learning rate of 0.001, momentum 0.9, and averaging parameter 0.1. For
the full-capacity uRNN, we also use RMSprop to optimize all network parameters, except for the
recurrence matrix, for which we use stochastic gradient descent along the Stiefel manifold using the
update (6) with a fixed learning rate of 0.001 and no gradient normalization.
Table 2: Log-magnitude STFT prediction results on speech data, evaluated using objective and perceptual metrics (see text for description).

    Model                     N    # parameters  Valid. MSE  Eval. MSE  SegSNR (dB)  STOI  PESQ
    LSTM                      84   ≈83k          18.02       18.32      1.95         0.77  1.99
    Restricted-capacity uRNN  128  ≈67k          15.03       15.78      3.30         0.83  2.36
    Restricted-capacity uRNN  158  ≈83k          15.06       14.87      3.32         0.83  2.33
    Full-capacity uRNN        128  ≈83k          14.78       15.24      3.57         0.84  2.40
    LSTM                      120  ≈135k         16.59       16.98      2.32         0.79  2.14
    Restricted-capacity uRNN  192  ≈101k         15.20       15.17      3.31         0.83  2.35
    Restricted-capacity uRNN  256  ≈135k         15.27       15.63      3.31         0.83  2.36
    Full-capacity uRNN        192  ≈135k         14.56       14.66      3.76         0.84  2.42
    LSTM                      158  ≈200k         15.49       15.80      2.92         0.81  2.24
    Restricted-capacity uRNN  378  ≈200k         15.78       16.14      3.16         0.83  2.35
    Full-capacity uRNN        256  ≈200k         14.41       14.45      3.75         0.84  2.38
Results are shown in Table 2, and Figure 2 shows example predictions of the three types of networks.
Figure 2: Ground truth and one-frame-ahead predictions of a spectrogram for an example utterance. For each model, hidden state dimension N is chosen for the best validation MSE. Notice that the full-capacity uRNN achieves the best detail in its predictions.

Results in Table 2 are given in terms of the mean-squared error (MSE) loss function and several metrics computed on the time-domain signals, which are reconstructed from the predicted log-magnitude and the original phase of the STFT. These time-domain metrics are segmental signal-to-noise ratio
(SegSNR), short-time objective intelligibility (STOI), and perceptual evaluation of speech quality
(PESQ). SegSNR, computed using [19], uses a voice activity detector to avoid measuring SNR in
silent frames. STOI is designed to correlate well with human intelligibility of speech, and takes on
values between 0 and 1, with a higher score indicating higher intelligibility [20]. PESQ is the ITU-T
standard for telephone voice quality testing [21, 22], and is a popular perceptual quality metric for
speech enhancement [23]. PESQ ranges from 1 (bad quality) to 4.5 (no distortion).
Note that full-capacity uRNNs generally perform better than restricted-capacity uRNNs with the
same number of parameters, and both types of uRNN significantly outperform LSTMs.
5.3 Pixel-by-pixel MNIST
As another challenging long-term memory task with natural data, we test the performance of LSTMs
and uRNNs on pixel-by-pixel MNIST and permuted pixel-by-pixel MNIST, first proposed by [5]
and used by [10] to test restricted-capacity uRNNs. For permuted pixel-by-pixel MNIST, the pixels
are shuffled, thereby creating some non-local dependencies between pixels in an image. Since the
MNIST images are 28 × 28 pixels, the resulting pixel-by-pixel sequences are T = 784 elements long.
We use 5000 of the 60000 training examples as a validation set to perform early stopping with a
patience of 5. The loss function is cross-entropy. Weights with the best validation loss are used to
process the evaluation set. The full-capacity uRNN uses RMSprop-style gradient normalization.
Table 3: Results for unpermuted and permuted pixel-by-pixel MNIST. Classification accuracies are reported for trained model weights that achieve the best validation loss.

Unpermuted:
    Model                     N    # parameters  Validation accuracy  Evaluation accuracy
    LSTM                      128  ≈68k          98.1                 97.8
    LSTM                      256  ≈270k         98.5                 98.2
    Restricted-capacity uRNN  512  ≈16k          97.9                 97.5
    Full-capacity uRNN        116  ≈16k          92.7                 92.8
    Full-capacity uRNN        512  ≈270k         97.5                 96.9

Permuted:
    Model                     N    # parameters  Validation accuracy  Evaluation accuracy
    LSTM                      128  ≈68k          91.7                 91.3
    LSTM                      256  ≈270k         92.1                 91.7
    Restricted-capacity uRNN  512  ≈16k          94.2                 93.3
    Full-capacity uRNN        116  ≈16k          92.2                 92.1
    Full-capacity uRNN        512  ≈270k         94.7                 94.1
Figure 3: Learning curves for unpermuted pixel-by-pixel MNIST (top panel) and permuted pixel-by-pixel MNIST (bottom panel).
Learning curves are shown in Figure 3, and a summary of classification accuracies is shown in Table
3. For the unpermuted task, the LSTM with N = 256 achieves the best evaluation accuracy of
98.2%. For the permuted task, the full-capacity uRNN with N = 512 achieves the best evaluation
accuracy of 94.1%, which is state-of-the-art on this task. Both uRNNs outperform LSTMs on the
permuted case, achieving their best performance after fewer training epochs and using an equal or lesser
number of trainable parameters. This performance difference suggests that LSTMs are only able
to model local dependencies, while uRNNs have superior long-term memory capabilities. Despite
not representing all unitary matrices, the restricted-capacity uRNN with N = 512 still achieves
impressive test accuracy of 93.3% with only 1/16 of the trainable parameters, outperforming the
full-capacity uRNN with N = 116 that matches number of parameters. This result suggests that
further exploration into the potential trade-off between hidden state dimension N and capacity of
unitary parameterizations is necessary.
6 Conclusion
Unitary recurrent matrices prove to be an effective means of addressing the vanishing and exploding
gradient problems. We provided a theoretical argument to quantify the capacity of constrained
unitary matrices. We also described a method for directly optimizing a full-capacity unitary matrix
by constraining the gradient to lie in the differentiable manifold of unitary matrices. The effect of
restricting the capacity of the unitary weight matrix was tested on system identification and memory
tasks, in which full-capacity unitary recurrent neural networks (uRNNs) outperformed restrictedcapacity uRNNs from [10] as well as LSTMs. Full-capacity uRNNs also outperformed restrictedcapacity uRNNs on log-magnitude STFT prediction of natural speech signals and classification
of permuted pixel-by-pixel images of handwritten digits, and both types of uRNN significantly
outperformed LSTMs. In future work, we plan to explore more general forms of restricted-capacity
unitary matrices, including constructions based on products of elementary unitary matrices such as
Householder operators or Givens operators.
Acknowledgments: We thank an anonymous reviewer for suggesting improvements to our proof in
Section 3 and Vamsi Potluru for helpful discussions. Scott Wisdom and Thomas Powers were funded
by U.S. ONR contract number N00014-12-G-0078, delivery orders 13 and 24. Les Atlas was funded
by U.S. ARO grant W911NF-15-1-0450.
References
[1] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[2] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, eds, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
[3] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. arXiv:1211.5063, Nov. 2012.
[4] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, Dec. 2013.
[5] Q. V. Le, N. Jaitly, and G. E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, Apr. 2015.
[6] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[7] K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv:1409.1259, 2014.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, Dec. 2015.
[9] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In Advances in Neural Information Processing Systems (NIPS), pp. 2204–2212, 2014.
[10] M. Arjovsky, A. Shah, and Y. Bengio. Unitary evolution recurrent neural networks. In International Conference on Machine Learning (ICML), Jun. 2016.
[11] A. S. Householder. Unitary triangularization of a nonsymmetric matrix. Journal of the ACM, 5(4):339–342, 1958.
[12] R. Gilmore. Lie Groups, Physics, and Geometry: An Introduction for Physicists, Engineers and Chemists. Cambridge University Press, 2008.
[13] A. Sard. The measure of the critical values of differentiable maps. Bulletin of the American Mathematical Society, 48(12):883–890, 1942.
[14] H. D. Tagare. Notes on optimization on Stiefel manifolds. Technical report, Yale University, 2011.
[15] T. Tieleman and G. Hinton. Lecture 6.5 - RMSprop: Divide the gradient by a running average of its recent magnitude, 2012. COURSERA: Neural Networks for Machine Learning.
[16] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688, May 2016.
[17] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, and D. S. Pallett. DARPA TIMIT acoustic-phonetic continuous speech corpus. Technical Report NISTIR 4930, National Institute of Standards and Technology, 1993.
[18] A. K. Halberstadt. Heterogeneous Acoustic Measurements and Multiple Classifiers for Speech Recognition. PhD thesis, Massachusetts Institute of Technology, 1998.
[19] M. Brookes. VOICEBOX: Speech processing toolbox for MATLAB, 2002. [Online]. Available: http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html.
[20] C. Taal, R. Hendriks, R. Heusdens, and J. Jensen. An algorithm for intelligibility prediction of time-frequency weighted noisy speech. IEEE Trans. on Audio, Speech, and Language Processing, 19(7):2125–2136, Sep. 2011.
[21] A. Rix, J. Beerends, M. Hollier, and A. Hekstra. Perceptual evaluation of speech quality (PESQ) - a new method for speech quality assessment of telephone networks and codecs. In Proc. ICASSP, vol. 2, pp. 749–752, 2001.
[22] ITU-T P.862. Perceptual evaluation of speech quality (PESQ): An objective method for end-to-end speech quality assessment of narrow-band telephone networks and speech codecs, 2000.
[23] P. C. Loizou. Speech Enhancement: Theory and Practice. CRC Press, Boca Raton, FL, Jun. 2007.
The Generalized Reparameterization Gradient
Francisco J. R. Ruiz
University of Cambridge
Columbia University
Michalis K. Titsias
Athens University of
Economics and Business
David M. Blei
Columbia University
Abstract
The reparameterization gradient has become a widely used method to obtain Monte
Carlo gradients to optimize the variational objective. However, this technique does
not easily apply to commonly used distributions such as beta or gamma without
further approximations, and most practical applications of the reparameterization
gradient fit Gaussian distributions. In this paper, we introduce the generalized reparameterization gradient, a method that extends the reparameterization gradient to a
wider class of variational distributions. Generalized reparameterizations use invertible transformations of the latent variables which lead to transformed distributions
that weakly depend on the variational parameters. This results in new Monte Carlo
gradients that combine reparameterization gradients and score function gradients.
We demonstrate our approach on variational inference for two complex probabilistic
models. The generalized reparameterization is effective: even a single sample from
the variational distribution is enough to obtain a low-variance gradient.
1 Introduction
Variational inference (vi) is a technique for approximating the posterior distribution in probabilistic models (Jordan et al., 1999; Wainwright and Jordan, 2008). Given a probabilistic model p(x, z) of observed variables x and hidden variables z, the goal of vi is to approximate the posterior p(z | x), which is intractable to compute exactly for many models. The idea of vi is to posit a family of distributions over the latent variables q(z; v) with free variational parameters v. vi then fits those parameters to find the member of the family that is closest in Kullback-Leibler (kl) divergence to the exact posterior, v* = arg min_v KL(q(z; v) ‖ p(z | x)). This turns inference into optimization, and different ways of doing vi amount to different optimization algorithms for solving this problem.

For a certain class of probabilistic models, those where each conditional distribution is in an exponential family, we can easily use coordinate ascent optimization to minimize the kl divergence (Ghahramani and Beal, 2001). However, many important models do not fall into this class (e.g., probabilistic neural networks or Bayesian generalized linear models). This is the scenario that we focus on in this paper. Much recent research in vi has focused on these difficult settings, seeking effective optimization algorithms that can be used with any model. This has enabled the application of vi on nonconjugate probabilistic models (Carbonetto et al., 2009; Paisley et al., 2012; Ranganath et al., 2014; Titsias and Lázaro-Gredilla, 2014), deep neural networks (Neal, 1992; Hinton et al., 1995; Mnih and Gregor, 2014; Kingma and Welling, 2014), and probabilistic programming (Wingate and Weber, 2013; Kucukelbir et al., 2015; van de Meent et al., 2016).
One strategy for vi in nonconjugate models is to obtain Monte Carlo estimates of the gradient of the variational objective and to use stochastic optimization to fit the variational parameters. Within this strategy, there have been two main lines of research: black-box variational inference (bbvi) (Ranganath et al., 2014) and reparameterization gradients (Salimans and Knowles, 2013; Kingma and Welling, 2014). Each enjoys different advantages and limitations.

bbvi expresses the gradient of the variational objective as an expectation with respect to the variational distribution using the log-derivative trick, also called reinforce or score function method (Glynn, 1990; Williams, 1992). It then takes samples from the variational distribution to calculate noisy gradients. bbvi is generic: it can be used with any type of latent variables and any model. However, the gradient estimates typically suffer from high variance, which can lead to slow convergence. Ranganath et al. (2014) reduce the variance of these estimates using Rao-Blackwellization (Casella and Robert, 1996) and control variates (Ross, 2002; Paisley et al., 2012; Gu et al., 2016). Other researchers have proposed further reductions, e.g., through local expectations (Titsias and Lázaro-Gredilla, 2015) and importance sampling (Ruiz et al., 2016).

The second approach to Monte Carlo gradients of the variational objective is through reparameterization (Price, 1958; Bonnet, 1964; Salimans and Knowles, 2013; Kingma and Welling, 2014; Rezende et al., 2014). This approach reparameterizes the latent variable z in terms of a set of auxiliary random variables whose distributions do not depend on the variational parameters (typically, a standard normal). This facilitates taking gradients of the variational objective because the gradient operator can be pushed inside the expectation, and because the resulting procedure only requires drawing samples from simple distributions, such as standard normals. We describe this in detail in Section 2.

Reparameterization gradients exhibit lower variance than bbvi gradients. They typically need only one Monte Carlo sample to estimate a noisy gradient, which leads to fast algorithms. Further, for some models, their variance can be bounded (Fan et al., 2015). However, reparameterization is not as generic as bbvi. It is typically used with Gaussian variational distributions and does not easily generalize to other common distributions, such as the gamma or beta, without using further approximations. (See Knowles (2015) for an alternative approach to deal with the gamma distribution.)

We develop the generalized reparameterization (g-rep) gradient, a new method to extend reparameterization to other variational distributions. The main idea is to define an invertible transformation of the latent variables such that the distribution of the transformed variables is only weakly governed by the variational parameters. (We make this precise in Section 3.) Our technique naturally combines both bbvi and reparameterization; it applies to a wide class of nonconjugate models; it maintains the black-box criteria of reusing variational families; and it avoids approximations. We empirically show in two probabilistic models, a nonconjugate factorization model and a deep exponential family (Ranganath et al., 2015), that a single Monte Carlo sample is enough to build an effective low-variance estimate of the gradient. In terms of speed, g-rep outperforms bbvi. In terms of accuracy, it outperforms automatic differentiation variational inference (advi) (Kucukelbir et al., 2016), which considers Gaussian variational distributions on a transformed space.
2 Background
Consider a probabilistic model p(x, z), where z denotes the latent variables and x the observations. We assume that the posterior distribution p(z | x) is analytically intractable and we wish to apply vi. We introduce a tractable distribution q(z; v) to approximate p(z | x) and minimize the kl divergence D_KL(q(z; v) ‖ p(z | x)) with respect to the variational parameters v. This minimization is equivalently expressed as the maximization of the so-called evidence lower bound (elbo) (Jordan et al., 1999),

    L(v) = E_{q(z;v)}[ log p(x, z) − log q(z; v) ] = E_{q(z;v)}[ f(z) ] + H[ q(z; v) ].    (1)

We denote

    f(z) ≜ log p(x, z)    (2)

to be the model log-joint density and H[q(z; v)] to be the entropy of the variational distribution. When the expectation E_{q(z;v)}[f(z)] is analytically tractable, the maximization of the elbo can be carried out using standard optimization methods. Otherwise, when it is intractable, other techniques are needed. Recent approaches rely on stochastic optimization to construct Monte Carlo estimates of the gradient with respect to the variational parameters. Below, we review the two main methods for building such Monte Carlo estimates: the score function method and the reparameterization trick.

Score function method. A general way to obtain unbiased stochastic gradients is to use the score function method, also called log-derivative trick or reinforce (Williams, 1992; Glynn, 1990), which has been recently applied to vi (Paisley et al., 2012; Ranganath et al., 2014; Mnih and Gregor, 2014). It is based on writing the gradient of the elbo with respect to v as

    ∇_v L = E_{q(z;v)}[ f(z) ∇_v log q(z; v) ] + ∇_v H[ q(z; v) ],    (3)

and then building Monte Carlo estimates by approximating the expectation with samples from q(z; v). The resulting estimator suffers from high variance, making it necessary to apply variance reduction methods such as control variates (Ross, 2002) or Rao-Blackwellization (Casella and Robert, 1996). Such variance reduction techniques have been used in bbvi (Ranganath et al., 2014).
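For reference, a minimal NumPy sketch of this estimator, without Rao-Blackwellization or control variates; the callback interface is an assumption of the sketch.

    import numpy as np

    def score_function_gradient(f, sample_q, grad_log_q, v, n_samples=30):
        # Monte Carlo estimate of E_q[f(z) grad_v log q(z; v)] (Eq. 3);
        # the entropy term and variance reduction are omitted here.
        draws = [sample_q(v) for _ in range(n_samples)]
        return np.mean([f(z) * grad_log_q(z, v) for z in draws], axis=0)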
Reparameterization. The reparameterization trick (Salimans and Knowles, 2013; Kingma and Welling, 2014) expresses the latent variables z as an invertible function of another set of variables ε, i.e., z = T(ε; v), such that the distribution of the new random variables q_ε(ε) does not depend on the variational parameters v. Under these assumptions, expectations with respect to q(z; v) can be expressed as E_{q(z;v)}[f(z)] = E_{q_ε(ε)}[f(T(ε; v))], and the gradient with respect to v can be pushed into the expectation, yielding

    ∇_v L = E_{q_ε(ε)}[ ∇_z f(z)|_{z=T(ε;v)} ∇_v T(ε; v) ] + ∇_v H[ q(z; v) ].    (4)

The assumption here is that the log-joint f(z) is differentiable. The gradient ∇_z f(z) depends on the model, but it can be computed using automatic differentiation tools (Baydin et al., 2015). Monte Carlo estimates of the reparameterization gradient typically present much lower variance than those based on Eq. 3. In practice, a single sample from q_ε(ε) is enough to obtain a low-variance estimate.¹

The reparameterization trick is thus a powerful technique to reduce the variance of the estimator, but it requires a transformation ε = T⁻¹(z; v) such that q_ε(ε) does not depend on the variational parameters v. For instance, if the variational distribution is Gaussian with mean μ and covariance Σ, a straightforward transformation consists of standardizing the random variable z, i.e.,

    ε = T⁻¹(z; μ, Σ) = Σ^{−1/2} (z − μ).    (5)

This transformation ensures that the (Gaussian) distribution q_ε(ε) does not depend on μ or Σ. For a general variational distribution q(z; v), Kingma and Welling (2014) discuss three families of transformations: inverse cumulative density function (cdf), location-scale, and composition. However, these transformations may not apply in certain cases.² Notably, none of them apply to the gamma³ and the beta distributions, although these distributions are often used in vi.

Next, we show how to relax the constraint that the transformed density q_ε(ε) must not depend on the variational parameters v. We follow a standardization procedure similar to the Gaussian case in Eq. 5, but we allow the distribution of the standardized variable to depend (at least weakly) on v.
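For comparison with what follows, here is a minimal NumPy sketch of the standard Gaussian reparameterization gradient based on Eqs. 4 and 5, parameterizing Σ by a Cholesky factor L (our choice of parameterization):

    import numpy as np

    def gaussian_rep_gradient(grad_f, mu, L_chol, n_samples=1):
        # Reparameterization estimate of grad of E_q[f(z)] for a Gaussian
        # q(z; mu, Sigma), Sigma = L L^T, via z = mu + L eps, eps ~ N(0, I).
        # grad_f(z) returns grad of the log-joint f at z (entropy omitted).
        d = mu.shape[0]
        g_mu = np.zeros(d)
        g_L = np.zeros((d, d))
        for _ in range(n_samples):
            eps = np.random.randn(d)
            z = mu + L_chol @ eps
            g = grad_f(z)
            g_mu += g                   # dz/dmu = I
            g_L += np.outer(g, eps)     # dz/dL = eps (per column)
        return g_mu / n_samples, g_L / n_samples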
3 The Generalized Reparameterization Gradient
We now generalize the reparameterization idea to distributions that, like the gamma or the beta, do not admit the standard reparameterization trick. We assume that we can efficiently sample from the variational distribution q(z; v), and that q(z; v) is differentiable with respect to z and v. We introduce a random variable ε defined by an invertible transformation

    ε = T⁻¹(z; v),  and  z = T(ε; v),    (6)

where we can think of ε = T⁻¹(z; v) as a standardization procedure that attempts to make the distribution of ε weakly dependent on the variational parameters v. 'Weakly' means that at least its first moment does not depend on v. For instance, if ε is defined to have zero mean, then its first moment has become independent of v. However, we do not assume that the resulting distribution of ε is completely independent of the variational parameters v, and therefore we write it as q_ε(ε; v). We use the distribution q_ε(ε; v) in the derivation of g-rep, but we write the final gradient as an expectation with respect to the original variational distribution q(z; v), from which we can sample.

More in detail, by the standard change-of-variable technique, the transformed density is

    q_ε(ε; v) = q(T(ε; v); v) J(ε, v),  where  J(ε, v) ≜ |det ∇_ε T(ε; v)|    (7)

is a short-hand for the absolute value of the determinant of the Jacobian. We first use the transformation to rewrite the gradient of E_{q(z;v)}[f(z)] in (1) as

    ∇_v E_{q(z;v)}[f(z)] = ∇_v E_{q_ε(ε;v)}[f(T(ε; v))] = ∇_v ∫ q_ε(ε; v) f(T(ε; v)) dε.    (8)

¹ In the literature, there is no formal proof that reparameterization has lower variance than the score function estimator, except for some simple models (Fan et al., 2015). Titsias and Lázaro-Gredilla (2014) provide some intuitions, and Rezende et al. (2014) show some benefits of reparameterization in the Gaussian case.
² The inverse cdf approach sets T⁻¹(z; v) to the cdf. This leads to a uniform distribution over ε on the unit interval, but it is not practical because the inverse cdf, T(ε; v), does not have an analytical solution in general. We develop an approach that does not require computation of (inverse) cdfs or their derivatives.
³ Composition is only available when it is possible to express the gamma as a sum of exponentials, i.e., its shape parameter is an integer, which is not generally the case in vi.
We now express the gradient as the sum of two terms, which we name g_rep and g_corr for reasons that we will explain below. We apply the log-derivative trick and the product rule for derivatives, yielding

    ∇_v E_{q(z;v)}[f(z)] = ∫ q_ε(ε; v) ∇_v f(T(ε; v)) dε + ∫ q_ε(ε; v) f(T(ε; v)) ∇_v log q_ε(ε; v) dε,    (9)

where the first integral gives rise to g_rep and the second to g_corr. We rewrite Eq. 9 as an expression that involves expectations with respect to the original variational distribution q(z; v) only. For that, we define the following two auxiliary functions that depend on the transformation T(ε; v):

    h(ε; v) ≜ ∇_v T(ε; v),  and  u(ε; v) ≜ ∇_v log J(ε, v).    (10)

After some algebra (see the Supplement for details), we obtain

    g_rep = E_{q(z;v)}[ ∇_z f(z) h(T⁻¹(z; v); v) ],
    g_corr = E_{q(z;v)}[ f(z) ( ∇_z log q(z; v) h(T⁻¹(z; v); v) + ∇_v log q(z; v) + u(T⁻¹(z; v); v) ) ].    (11)

Thus, we can finally write the full gradient of the elbo as

    ∇_v L = g_rep + g_corr + ∇_v H[ q(z; v) ].    (12)

Interpretation of the generalized reparameterization gradient. The term g_rep is easily recognizable as the standard reparameterization gradient, and hence the label 'rep.' Indeed, if the distribution q_ε(ε; v) does not depend on the variational parameters v, then the term ∇_v log q_ε(ε; v) in Eq. 9 vanishes, making g_corr = 0. Thus, we may interpret g_corr as a 'correction' term that appears when the transformed density depends on the variational parameters.

Furthermore, we can recover the score function gradient in Eq. 3 by choosing the identity transformation, z = T(ε; v) = ε. In such a case, the auxiliary functions in Eq. 10 become zero because the transformation does not depend on v, i.e., h(ε; v) = 0 and u(ε; v) = 0. This implies that g_rep = 0 and g_corr = E_{q(z;v)}[ f(z) ∇_v log q(z; v) ].

Alternatively, we can interpret the g-rep gradient as a control variate of the score function gradient. For that, we rearrange Eqs. 9 and 11 to express the gradient as

    ∇_v E_{q(z;v)}[f(z)] = E_{q(z;v)}[ f(z) ∇_v log q(z; v) ]
                           + g_rep + E_{q(z;v)}[ f(z) ( ∇_z log q(z; v) h(T⁻¹(z; v); v) + u(T⁻¹(z; v); v) ) ],

where the second line is the control variate, which involves the reparameterization gradient.
Transformations. Eqs. 9 and 11 are valid for any transformation T(ε; v). However, we may expect some transformations to perform better than others, in terms of the variance of the resulting estimator. It seems sensible to search for transformations that make g_corr small, as the reparameterization gradient g_rep is known to present low variance in practice under standard smoothness conditions of the log-joint (Fan et al., 2015).⁴ Transformations that make g_corr small are such that ε = T⁻¹(z; v) becomes weakly dependent on the variational parameters v. In the standard reparameterization of Gaussian random variables, the transformation takes the form in (5), and thus ε is a standardized version of z. We mimic this standardization idea for other distributions as well. In particular, for exponential family distributions, we use transformations of the form (sufficient statistic − expected sufficient statistic)/(scale factor). We present several examples in the next section.
3.1 Examples
For concreteness, we show here some examples of the equations above for well-known probability distributions. In particular, we choose the gamma, log-normal, and beta distributions.

Gamma distribution. Let q(z; α, β) be a gamma distribution with shape α and rate β. We use a transformation based on standardization of the sufficient statistic log(z), i.e.,

    ε = T⁻¹(z; α, β) = ( log(z) − ψ(α) + log(β) ) / √ψ₁(α),

where ψ(·) denotes the digamma function, and ψ_k(·) is its k-th derivative. This ensures that ε has zero mean and unit variance, and thus its two first moments do not depend on the variational parameters α and β. We now compute the auxiliary functions in Eq. 10 for the components of the gradient with respect to α and β, which take the form

    h_α(ε; α, β) = T(ε; α, β) ( ε ψ₂(α) / (2√ψ₁(α)) + ψ₁(α) ),
    h_β(ε; α, β) = −T(ε; α, β) / β,
    u_α(ε; α, β) = ε ψ₂(α) / (2√ψ₁(α)) + ψ₁(α) + ψ₂(α) / (2ψ₁(α)),
    u_β(ε; α, β) = −1/β.

The terms g_rep and g_corr are obtained after substituting these results in Eq. 11. We provide the final expressions in the Supplement. We remark here that the component of g_corr corresponding to the derivative with respect to the rate equals zero, i.e., g_corr_β = 0, meaning that the distribution of ε does not depend on the parameter β. Indeed, we can compute this distribution following Eq. 7 as

    q_ε(ε; α, β) = ( √ψ₁(α) / Γ(α) ) exp( α ( ε√ψ₁(α) + ψ(α) ) − exp( ε√ψ₁(α) + ψ(α) ) ),

where we can verify that it does not depend on β.

⁴ Techniques such as Rao-Blackwellization could additionally be applied to reduce the variance of g_corr. We do not apply any such technique in this paper.
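The quantities above can be computed in a few lines of SciPy; the sketch below follows the transformation stated in the text, and the function name and return convention are our own choices.

    import numpy as np
    from scipy.special import digamma, polygamma

    def gamma_grep_terms(z, alpha, beta):
        # eps = (log z - psi(alpha) + log beta) / sqrt(psi_1(alpha)), and the
        # auxiliary functions h and u of Eq. 10 for this transformation.
        psi1 = polygamma(1, alpha)    # first derivative of the digamma
        psi2 = polygamma(2, alpha)    # second derivative of the digamma
        sqrt_psi1 = np.sqrt(psi1)
        eps = (np.log(z) - digamma(alpha) + np.log(beta)) / sqrt_psi1
        h_alpha = z * (eps * psi2 / (2.0 * sqrt_psi1) + psi1)
        h_beta = -z / beta
        u_alpha = eps * psi2 / (2.0 * sqrt_psi1) + psi1 + psi2 / (2.0 * psi1)
        u_beta = -1.0 / beta
        return eps, h_alpha, h_beta, u_alpha, u_beta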
Log-normal distribution. For a log-normal distribution with location μ and scale σ, we can standardize the sufficient statistic log(z) as

    ε = T⁻¹(z; μ, σ) = ( log(z) − μ ) / σ.

This leads to a standard normal distribution on ε, which does not depend on the variational parameters, and thus g_corr = 0. The auxiliary function h(ε; μ, σ), which is needed for g_rep, takes the form

    h_μ(ε; μ, σ) = T(ε; μ, σ),  h_σ(ε; μ, σ) = ε T(ε; μ, σ).

Thus, the reparameterization gradient is given in this case by

    g_rep_μ = E_{q(z;μ,σ)}[ z ∇_z f(z) ],  g_rep_σ = E_{q(z;μ,σ)}[ z T⁻¹(z; μ, σ) ∇_z f(z) ].

This corresponds to advi (Kucukelbir et al., 2016) with a logarithmic transformation over a positive random variable, since the variational distribution over the transformed variable is Gaussian. For a general variational distribution, we recover advi if the transformation makes ε Gaussian.
Beta distribution. For a random variable z ~ Beta(α, β), we could rewrite z = z₁′/(z₁′ + z₂′) for z₁′ ~ Gamma(α, 1) and z₂′ ~ Gamma(β, 1), and apply the gamma reparameterization for z₁′ and z₂′. Instead, in the spirit of applying standardization directly over z, we define a transformation to standardize the logit function, logit(z) ≜ log(z/(1 − z)) (sum of sufficient statistics of the beta),

    ε = T⁻¹(z; α, β) = ( logit(z) − ψ(α) + ψ(β) ) / σ(α, β).

This ensures that ε has zero mean. We can set the denominator to the standard deviation of logit(z). However, for larger-scaled models we found better performance with a denominator σ(α, β) that makes g_corr = 0 for the currently drawn sample z (see the Supplement for details), even though the variance of the transformed variable is not one in such case.⁵ The reason is that g_corr suffers from high variance in the same way as the score function estimator does.

⁵ Note that this introduces some bias since we are ignoring the dependence of σ(α, β) on z.

3.2 Algorithm
We now present our full algorithm for g-rep. It requires the specification of the variational family and the transformation T(ε; v). Given these, the full procedure is summarized in Algorithm 1. We use the adaptive step-size sequence proposed by Kucukelbir et al. (2016), which combines rmsprop (Tieleman and Hinton, 2012) and Adagrad (Duchi et al., 2011). Let g_k^(i) be the k-th component of the gradient at the i-th iteration, and ρ_k^(i) the step-size for that component. We set

    ρ_k^(i) = η · i^(−0.5+δ) · ( τ + √(s_k^(i)) )^(−1),  with  s_k^(i) = α (g_k^(i))² + (1 − α) s_k^(i−1),    (13)

where we set δ = 10^(−16), τ = 1, α = 0.1, and we explore several values of η. Thus, we update the variational parameters as v^(i+1) = v^(i) + ρ^(i) ∘ ∇_v L, where '∘' is the element-wise product.
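A minimal NumPy sketch of this step-size schedule; the function name, argument order, and symbol names are our own choices.

    import numpy as np

    def step_size(g, s_prev, i, eta, alpha=0.1, tau=1.0, delta=1e-16):
        # Running average of squared gradients (rmsprop-style), Eq. 13.
        s = alpha * g ** 2 + (1.0 - alpha) * s_prev
        # Adagrad-style decaying rate with per-component scaling.
        rho = eta * i ** (-0.5 + delta) / (tau + np.sqrt(s))
        return rho, s

    # One update: v = v + rho * grad_elbo   (element-wise product)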
Algorithm 1: Generalized reparameterization gradient algorithm
input: data x, probabilistic model p(x, z), variational family q(z; v), transformation z = T(ε; v)
output: variational parameters v
Initialize v
repeat
    Draw a single sample z ~ q(z; v)
    Compute the auxiliary functions h(T⁻¹(z; v); v) and u(T⁻¹(z; v); v) (Eq. 10)
    Estimate g_rep and g_corr (Eq. 11, estimate the expectation with one sample)
    Compute (analytic) or estimate (Monte Carlo) the gradient of the entropy, ∇_v H[q(z; v)]
    Compute the noisy gradient ∇_v L (Eq. 12)
    Set the step-size ρ^(i) (Eq. 13) and take a gradient step for v
until convergence
3.3 Related work
A closely related vi method is advi, which also relies on reparameterization and has been incorporated
into Stan (Kucukelbir et al., 2015, 2016). advi applies a transformation to the random variables such
that their support is on the reals and then uses a Gaussian variational posterior on the transformed
space. For instance, random variables that are constrained to be positive are first transformed through
a logarithmic function and then a Gaussian variational approximating distribution is placed on the
unconstrained space. Thus, advi struggles to approximate probability densities with singularities,
which are useful in models where sparsity is appropriate. In contrast, the g-rep method allows
to estimate the gradient for a wider class of variational distributions, including gamma and beta
distributions, which are more appropriate to encode sparsity constraints.
Schulman et al. (2015) also write the gradient in the form given in Eq. 12 to automatically estimate
the gradient through a backpropagation algorithm in the context of stochastic computation graphs.
However, they do not provide additional insight into this equation, do not apply it to general vi, do
not discuss transformations for any distributions, and do not report experiments. Thus, our paper
complements Schulman et al. (2015) and provides an off-the-shelf tool for general vi.
4 Experiments
We apply g-rep to perform mean-field vi on two nonconjugate probabilistic models: the sparse gamma deep exponential family (def) and a beta-gamma matrix factorization (mf) model. The sparse gamma def (Ranganath et al., 2015) is a probabilistic model with several layers of latent locations and latent weights, mimicking the architecture of a deep neural network. The weights of the model are denoted by w_{kk'}^(ℓ), where k and k' run over latent components, and ℓ indexes the layer. The latent locations are z_{nk}^(ℓ), where n denotes the observation. We consider Poisson-distributed observations x_{nd} for each dimension d. Thus, the model is specified as

    z_{nk}^(ℓ) ~ Gamma( α_z, α_z / Σ_{k'} z_{nk'}^(ℓ+1) w_{k'k}^(ℓ) ),
    x_{nd} ~ Poisson( Σ_k z_{nk}^(1) w_{kd}^(0) ).

We place gamma priors over the weights w_{kk'}^(ℓ) with rate 0.3 and shape 0.1, and a gamma prior with rate 0.1 and shape 0.1 over the top-layer latent variables z_{nk}^(L). We set the hyperparameter α_z = 0.1, and we use L = 3 layers with 100, 40, and 15 latent factors.
The second model is a beta-gamma mf model with weights w_{kd} and latent locations z_{nk}. We use this model to describe binary observations x_{nd}, which are modeled as

    x_{nd} ~ Bernoulli( sigmoid( Σ_k logit(z_{nk}) w_{kd} ) ),

where logit(z) = log(z/(1 − z)) and sigmoid(·) is the inverse logit function. We place a gamma prior with shape 0.1 and rate 0.3 over the weights w_{kd}, a uniform prior over the variables z_{nk}, and we use K = 100 latent components.
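As an illustration, here is a minimal NumPy sketch that draws a synthetic dataset from this generative model; the clipping of z away from {0, 1} is a numerical guard we add, not part of the model.

    import numpy as np

    def sample_beta_gamma_mf(N, D, K=100):
        # w_kd ~ Gamma(shape 0.1, rate 0.3); numpy uses scale = 1/rate.
        w = np.random.gamma(shape=0.1, scale=1.0 / 0.3, size=(K, D))
        # z_nk ~ Uniform(0, 1), clipped away from {0, 1} for a finite logit.
        z = np.clip(np.random.uniform(size=(N, K)), 1e-6, 1.0 - 1e-6)
        logits = np.log(z / (1.0 - z)) @ w
        probs = 1.0 / (1.0 + np.exp(-logits))
        return np.random.binomial(1, probs)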
Datasets. We apply the sparse gamma def on two different databases: (i) the Olivetti database at AT&T,⁶ which consists of 400 (320 for training and 80 for test) 64 × 64 images of human faces in an 8-bit scale (0–255); and (ii) the collection of papers at the Neural Information Processing Systems (nips) 2011 conference, which consists of 305 documents and a vocabulary of 5715 effective words in a bag-of-words format (25% of words from all documents are set aside to form the test set). We apply the beta-gamma mf on: (i) the binarized mnist data,⁷ which consists of 28 × 28 images of hand-written digits (we use 5000 training and 2000 test images); and (ii) the Omniglot dataset (Lake et al., 2015), which consists of 105 × 105 images of hand-written characters from different alphabets (we select 10 alphabets, with 4425 training images, 1475 test images, and 295 characters).

Table 1: (Left) Step-size constant η, reported for completeness. (Right) Average time per iteration in seconds. g-rep is 1-4 times slower than advi but above one order of magnitude faster than bbvi.

Step-size constant η:
    Dataset    g-rep  bbvi  advi
    Olivetti   5      1     0.1
    nips       0.5    5     1
    mnist      5      5     0.1
    Omniglot   5      n/a   0.1

Average time per iteration (seconds):
    Dataset    g-rep  bbvi   advi
    Olivetti   0.46   12.90  0.17
    nips       0.83   20.95  0.25
    mnist      1.09   25.99  0.34
    Omniglot   5.50   n/a    4.10

⁶ http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
Evaluation. We apply mean-field vi and we compare g-rep with bbvi (Ranganath et al., 2014) and advi (Kucukelbir et al., 2016). We do not apply bbvi on the Omniglot dataset due to its computational complexity. At each iteration, we evaluate the elbo using one sample from the variational distribution, except for advi, for which we use 20 samples (for the Omniglot dataset, we only use one sample). We run each algorithm with a fixed computational budget of CPU time. After that time, we also evaluate the predictive log-likelihood on the test set, averaging over 100 posterior samples. For the nips data, we also compute the test perplexity (with one posterior sample) every 10 iterations, given by

    exp( − ( Σ_docs Σ_{w ∈ doc(d)} log p(w | #held out in doc(d)) ) / #held out words ).
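For concreteness, a minimal sketch of this perplexity computation; the input convention is an assumption of the sketch.

    import numpy as np

    def test_perplexity(log_probs_per_doc, n_heldout_words):
        # log_probs_per_doc: one array per document with log p(w | observed
        # part of the document), evaluated at that document's held-out words.
        total = sum(lp.sum() for lp in log_probs_per_doc)
        return np.exp(-total / n_heldout_words)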
Experimental setup. To estimate the gradient, we use 30 Monte Carlo samples for bbvi, and only 1
for advi and g-rep. For bbvi, we use Rao-Blackwellization and control variates (we use a separate
set of 30 samples to estimate the control variates). For bbvi and g-rep, we use beta and gamma
variational distributions, whereas advi uses Gaussian distributions on the transformed space, which
correspond to log-normal or logit-normal distributions on the original space. Thus, only g-rep and
bbvi optimize the same variational family. We parameterize the gamma distribution in terms of
its shape and mean, and the beta in terms of its shape parameters ? and ?. To avoid constrained
optimization, we apply the transformation v 0 D log.exp.v/ 1/ to the variational parameters that are
constrained to be positive and take stochastic gradient steps with respect to v 0 . We use the analytic
gradient of the entropy terms. We implement advi as described by Kucukelbir et al. (2016).
We use the step-size schedule in Eq. 13, and we explore the parameter 2 f0:1; 0:5; 1; 5g. For each
algorithm and each dataset, we report the results based on the value of for which the best elbo was
achieved. We report the values of in Table 1 (left).
Results. We show in Figure 1 the evolution of the elbo as a function of the running time for
three of the considered datasets. bbvi converges slower than the rest of the methods, since each
iteration involves drawing multiple samples and evaluating the log-joint for each of them. advi and
g-rep achieve similar bounds, except for the mnist dataset, for which g-rep provides a variational
approximation that is closer to the posterior, since the elbo is higher. This is because a variational
family with sparse gamma and beta distributions provides a better fit to the data than the variational
family to which advi is limited (log-normal and logit-normal). advi seems to converge slower;
however, we do not claim that advi converges slower than g-rep in general. Instead, the difference
may be due to the different step-size schedules that we found to be optimal (see Table 1). We also
report in Table 1 (right) the average time per iteration⁸ for each method: bbvi is the slowest method,
and advi is the fastest because it involves simulation of Gaussian random variables only.
However, g-rep provides higher likelihood values than advi. We show in Figure 2a the evolution of
the perplexity (lower is better) for the nips dataset, and in Figure 2b the resulting test log-likelihood
(larger is better) for the rest of the considered datasets. In Figure 2b, we report the mean and standard
deviation over 100 posterior samples. advi cannot fit the data as well as g-rep or bbvi because it is
constrained to log-normal and logit-normal variational distributions. These cannot capture sparsity,
⁷ http://yann.lecun.com/exdb/mnist
⁸ On the full mnist with 50,000 training images, g-rep (advi) took 8.08 (2.04) seconds per iteration.
[Figure 1: ELBO (y-axis) vs. time in hours (x-axis) for g-rep, bbvi, and advi, with panels (a) elbo (Olivetti dataset), (b) elbo (mnist dataset), and (c) elbo (Omniglot dataset).]
Figure 1: Comparison between g-rep, bbvi, and advi in terms of the variational objective function.
[Figure 2a: test perplexity (roughly 1000-2500; lower is better) vs. time in hours on the nips dataset for g-rep, bbvi, and advi.]
(a) Perplexity (nips dataset).

Dataset     g-rep              bbvi                advi
Olivetti    −4.48 ± 0.01       −9.74 ± 0.08        −4.63 ± 0.01
mnist       −0.0932 ± 0.0004   −0.0888 ± 0.0004    −0.189 ± 0.009
Omniglot    −0.0472 ± 0.0001   --                  −0.0823 ± 0.0009
(b) Average test log-likelihood per entry x_nd.

Figure 2: Comparison between g-rep, bbvi, and advi in terms of performance on the test set. g-rep
outperforms bbvi because the latter has not converged in the allowed time, and it also outperforms
advi because of the variational family it uses.
which is an important feature for the considered models. We can also conclude this by a simple visual
inspection of the fitted models. In the Supplement, we compare images sampled from the g-rep and
the advi posteriors, where we can observe that the latter are more blurry or lack some details.
5
Conclusion
We have introduced the generalized reparameterization gradient (g-rep), a technique to extend the
standard reparameterization gradient to a wider class of variational distributions. As the standard
reparameterization method, our method is applicable to any probabilistic model that is differentiable
with respect to the latent variables. We have demonstrated the generalized reparameterization gradient
on two nonconjugate probabilistic models to fit a variational approximation involving gamma and
beta distributions. We have also empirically shown that a single Monte Carlo sample is enough to
obtain a noisy estimate of the gradient, therefore leading to a fast inference procedure.
Acknowledgments
This project has received funding from the EU H2020 programme (Marie Skłodowska-Curie grant
agreement 706760), NSF IIS-1247664, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009,
DARPA N66001-15-C-4032, Adobe, the John Templeton Foundation, and the Sloan Foundation. The
authors would also like to thank Kriste Krstovski, Alp Kuckukelbir, and Christian A. Naesseth for
helpful comments and discussions.
References
Baydin, A. G., Pearlmutter, B. A., and Radul, A. A. (2015). Automatic differentiation in machine learning: a survey. arXiv:1502.05767.
Bonnet, G. (1964). Transformations des signaux aléatoires à travers les systèmes non linéaires sans mémoire. Annals of Telecommunications, 19(9):203–220.
Carbonetto, P., King, M., and Hamze, F. (2009). A stochastic approximation method for inference in probabilistic graphical models. In Advances in Neural Information Processing Systems.
Casella, G. and Robert, C. P. (1996). Rao-Blackwellisation of sampling schemes. Biometrika, 83(1):81–94.
Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.
Fan, K., Wang, Z., Beck, J., Kwok, J., and Heller, K. A. (2015). Fast second order stochastic backpropagation for variational inference. In Advances in Neural Information Processing Systems.
Ghahramani, Z. and Beal, M. J. (2001). Propagation algorithms for variational Bayesian learning. In Advances in Neural Information Processing Systems.
Glynn, P. W. (1990). Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84.
Gu, S., Levine, S., Sutskever, I., and Mnih, A. (2016). MuProp: Unbiased backpropagation for stochastic neural networks. In International Conference on Learning Representations.
Hinton, G., Dayan, P., Frey, B. J., and Neal, R. M. (1995). The wake-sleep algorithm for unsupervised neural networks. Science, 268(5214):1158–1161.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233.
Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In International Conference on Learning Representations.
Knowles, D. A. (2015). Stochastic gradient variational Bayes for gamma approximating distributions. arXiv:1509.01631v1.
Kucukelbir, A., Ranganath, R., Gelman, A., and Blei, D. M. (2015). Automatic variational inference in Stan. In Advances in Neural Information Processing Systems.
Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., and Blei, D. M. (2016). Automatic differentiation variational inference. arXiv:1603.00788.
Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338.
Mnih, A. and Gregor, K. (2014). Neural variational inference and learning in belief networks. In International Conference on Machine Learning.
Neal, R. (1992). Connectionist learning of belief networks. Artificial Intelligence, 56(1):71–113.
Paisley, J. W., Blei, D. M., and Jordan, M. I. (2012). Variational Bayesian inference with stochastic search. In International Conference on Machine Learning.
Price, R. (1958). A useful theorem for nonlinear devices having Gaussian inputs. IRE Transactions on Information Theory, 4(2):69–72.
Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In Artificial Intelligence and Statistics.
Ranganath, R., Tang, L., Charlin, L., and Blei, D. M. (2015). Deep exponential families. In Artificial Intelligence and Statistics.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning.
Ross, S. M. (2002). Simulation. Elsevier.
Ruiz, F. J. R., Titsias, M. K., and Blei, D. M. (2016). Overdispersed black-box variational inference. In Uncertainty in Artificial Intelligence.
Salimans, T. and Knowles, D. A. (2013). Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837–882.
Schulman, J., Heess, N., Weber, T., and Abbeel, P. (2015). Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems.
Tieleman, T. and Hinton, G. (2012). Lecture 6.5-RMSPROP: Divide the gradient by a running average of its recent magnitude. Coursera: Neural Networks for Machine Learning, 4.
Titsias, M. K. and Lázaro-Gredilla, M. (2014). Doubly stochastic variational Bayes for non-conjugate inference. In International Conference on Machine Learning.
Titsias, M. K. and Lázaro-Gredilla, M. (2015). Local expectation gradients for black box variational inference. In Advances in Neural Information Processing Systems.
van de Meent, J.-W., Tolpin, D., Paige, B., and Wood, F. (2016). Black-box policy search with probabilistic programs. In Artificial Intelligence and Statistics.
Wainwright, M. J. and Jordan, M. I. (2008). Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305.
Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4):229–256.
Wingate, D. and Weber, T. (2013). Automated variational inference in probabilistic programming. arXiv:1301.1299.
"Short-Dot": Computing Large Linear Transforms
Distributedly Using Coded Short Dot Products
Sanghamitra Dutta
Carnegie Mellon University
[email protected]
Viveck Cadambe
Pennsylvania State University
[email protected]
Pulkit Grover
Carnegie Mellon University
[email protected]
Abstract
Faced with saturation of Moore's law and increasing size and dimension of data,
system designers have increasingly resorted to parallel and distributed computing
to reduce computation time of machine-learning algorithms. However, distributed
computing is often bottlenecked by a small fraction of slow processors called
"stragglers" that reduce the speed of computation because the fusion node has
to wait for all processors to complete their processing. To combat the effect
of stragglers, recent literature proposes introducing redundancy in computations
across processors, e.g., using repetition-based strategies or erasure codes. The
fusion node can exploit this redundancy by completing the computation using
outputs from only a subset of the processors, ignoring the stragglers. In this paper,
we propose a novel technique — that we call "Short-Dot" — to introduce redundant
computations in a coding theory inspired fashion, for computing linear transforms
of long vectors. Instead of computing long dot products as required in the original
linear transform, we construct a larger number of redundant and short dot products
that can be computed more efficiently at individual processors. Further, only a
subset of these short dot products are required at the fusion node to finish the
computation successfully. We demonstrate through probabilistic analysis as well
as experiments on computing clusters that Short-Dot offers significant speed-up
compared to existing techniques. We also derive trade-offs between the length of
the dot-products and the resilience to stragglers (number of processors required to
finish), for any such strategy and compare it to that achieved by our strategy.
1
Introduction
This work proposes a coding-theory inspired computation technique for speeding up computing
linear transforms of high-dimensional data by distributing it across multiple processing units that
compute shorter dot products. Our main focus is on addressing the "straggler effect," i.e., the problem
of delays caused by a few slow processors that bottleneck the entire computation. To address this
problem, we provide techniques (building on [1] [2] [3] [4] [5]) that introduce redundancy in the
computation by designing a novel error-correction mechanism that allows the size of individual dot
products computed at each processor to be shorter than the length of the input.
The problem of computing linear transforms of high-dimensional vectors is ?the" critical step [6] in
several machine learning and signal processing applications. Dimensionality reduction techniques
such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), taking random
projections, require the computation of short and fat linear transforms on high-dimensional data.
Linear transforms are the building blocks of solutions to various machine learning problems, e.g.,
regression and classification etc., and are also used in acquiring and pre-processing the data through
Fourier transforms, wavelet transforms, filtering, etc. Fast and reliable computation of linear transforms is thus a necessity for low-latency inference [6]. Due to saturation of Moore's law, increasing
speed of computing in a single processor is becoming difficult, forcing practitioners to adopt parallel
processing to speed up computing for ever increasing data dimensions and sizes.
Classical approaches to computing linear transforms across parallel processors, e.g., Block-Striped
Decomposition [7], Fox's method [8, 7], and Cannon's method [7], rely on dividing the computational
task equally among all available processors¹ without any redundant computation. The fusion node
collects the outputs from each processors to complete the computation and thus has to wait for
all the processors to finish. In almost all distributed systems, a few slow or faulty processors —
called "stragglers" [11] — are observed to delay the entire computation. This unpredictable latency in
distributed systems is attributed to factors such as network latency, shared resources, maintenance
activities, and power limitations. In order to combat stragglers, cloud computing frameworks
like Hadoop [12] employ various straggler detection techniques and usually reset the task allotted
to stragglers. Forward error-correction techniques offer an alternative approach to deal with this
?straggler effect? by introducing redundancy in the computational tasks across different processors.
The fusion node now requires outputs from only a subset of all the processors to successfully finish.
In this context, the use of preliminary erasure codes dates back to the ideas of algorithmic fault
tolerance [13] [14]. Recently, optimized repetition and Maximum Distance Separable (MDS) [19]
codes have been explored [2] [3] [1] [16] to speed up computations.
We consider the problem of computing Ax where A (of size M × N) is a given matrix and x (of size N × 1) is a vector
that is input to the computation (M ≪ N). In contrast with [1], which also uses codes to compute
linear transforms in parallel, we allow the size of individual dot products computed at each processor
to be smaller than N , the length of the input. Why might one be interested in computing short
dot products while performing an overall large linear transform? This is because for distributed
digital processors, the computation time is reduced with the number of operations (length of the
dot-products). In Sections 4 and 5, we show that the computation speed-up can be increased beyond
that obtained in [1]. Another interesting example comes from recent work on designing processing
units that exclusively compute dot-products using analog components [17, 18]. These devices are
prone to errors and increased delays in convergence when designed for larger dot products.
To summarize, our main contributions are:
1. To compute Ax for a given matrix A (of size M × N), we instead compute F x where we construct F (of size P × N)
(total no. of processors P > required no. of dot-products M) such that each N-length row of F has
at most N(P − K + M)/P non-zero elements. Because the locations of zeros in a row of F are
known by design, this reduces the complexity of computing dot-products of rows of F with x. Here
K parameterizes the resilience to stragglers: any K of the P dot products of rows of F with x are
sufficient to recover Ax, i.e., any K rows of F can be linearly combined to generate the rows of A.
2. We provide fundamental limits on the trade-off between the length of the dot-products and the
straggler resilience (number of processors to wait for) for any such strategy in Section 3. This
suggests a lower bound on the length of task allotted per processor. However, we believe that these
limits are loose and point to an interesting direction for future work.
3. Assuming exponential tails of service-times at each server (used in [1]), we derive the expected
computation time required by our strategy and compare it to uncoded parallel processing, repetition
strategy and MDS codes [19] (see Fig. 2). Short-Dot offers a speed-up by a factor of Θ(log P) over
uncoded parallel processing and repetition, and nearly by a factor of Θ(P/M) compared to MDS
codes when M is linear in P. The strategy outperforms repetition or MDS codes by a factor of
Θ(P/(M log(P/M))) when M is sub-linear in P.
4. We provide experimental results showing that Short-Dot is faster than existing strategies.
For the rest of the paper, we define the sparsity of a vector u ∈ R^N as the number of nonzero
elements in the vector, i.e., ‖u‖₀ = Σ_{j=1}^{N} 1(u_j ≠ 0). We also assume that P divides N (P ≪ N).
Comparison with existing strategies: Consider the problem of computing a single dot product of
an input vector x ∈ R^N with a pre-specified vector a ∈ R^N. By an "uncoded" parallel processing
strategy (which includes Block-Striped Decomposition [7]), we mean a strategy that does not use
redundancy to overcome delays caused by stragglers. One uncoded strategy is to partition the dot
product into P smaller dot products, where P is the number of available processors. E.g., a can
¹ Strassen's algorithm [9] and its generalizations offer a recursive approach to faster matrix multiplications
over multiple processors, but they are often not preferred because of their high communication cost [10].
Figure 1: A dot-product of length N = 12 is being computed in parallel using P = 6 processors.
(Left) Uncoded parallel processing — divide into P parts. (Right) Repetition with block partitioning.
be divided into P parts — constructing P short vectors of sparsity N/P — with each vector stored
in a different processor (as shown in Fig. 1 left). Only the nonzero values of the vector need to be
stored since the locations of the nonzero values are known a priori at every node. One might expect
the computation time for each processor to reduce by a factor of P. However, now the fusion node
has to wait for all the P processors to finish their computation, and the stragglers can now delay the
entire computation. Can we construct P vectors such that dot products of a subset of them with x
are sufficient to compute ⟨a, x⟩? A simple coded strategy is repetition with block partitioning, i.e.,
constructing L vectors of sparsity N/L by partitioning the vector of length N into L parts (L < P),
and repeating the L vectors P/L times so as to obtain P vectors of sparsity N/L as shown in Fig. 1
(right). For each of the L parts of the vector, the fusion node only needs the output of one processor
among all its repetitions. Instead of a single dot-product, if one requires the dot-product of x with M
vectors {a₁, . . . , a_M}, one can simply repeat the aforementioned strategy M times.
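To make these two baselines concrete, the following small illustration (ours, not from the paper) shows how a single vector a would be split for the uncoded strategy and for repetition with block partitioning:

```python
import numpy as np

def uncoded_parts(a, P):
    """Uncoded: P disjoint chunks of sparsity N/P; all P outputs are needed."""
    return np.split(a, P)                  # chunk i goes to processor i

def repetition_parts(a, P, L):
    """Repetition: L chunks of sparsity N/L, each replicated P/L times."""
    assert P % L == 0, "P/L replicas per chunk"
    chunks = np.split(a, L)
    return [chunks[i % L] for i in range(P)]   # processor i works on chunk i mod L

# Either way, the fusion node recovers <a, x> as the sum of chunk-wise partial
# dot products, using (for repetition) the earliest replica of each chunk.
```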
For multiple dot-products, an alternative repetition-based strategy is to compute the M dot products
P/M times in parallel at different processors. Now we only have to wait for at least one processor
corresponding to each of the M vectors to finish. Improving upon repetition, it is shown in [1]
that a (P, M)-MDS code allows constructing P coded vectors such that any M of the P dot-products
can be used to reconstruct all the M original vectors (see Fig. 2b). This strategy is shown, both
experimentally and theoretically, to perform better than repetition and uncoded strategies.
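A minimal sketch of such an MDS-coded computation, using a Vandermonde generator so that any M of the P coded rows are decodable; this particular code choice and all names are our illustration, not prescribed by [1]:

```python
import numpy as np

def mds_encode(A, P):
    """Encode the M rows of A into P coded rows, each still of full length N."""
    M = A.shape[0]
    G = np.vander(np.arange(1.0, P + 1), M, increasing=True)  # P x M; any M rows invertible
    return G, G @ A

def mds_decode(G, finished, outputs):
    """Recover all M dot products from the outputs of the first M finishers."""
    return np.linalg.solve(G[finished], outputs)

# Example: M = 3 vectors of length N = 12 on P = 6 processors.
A, x = np.arange(36.0).reshape(3, 12), np.ones(12)
G, F = mds_encode(A, P=6)
done = [4, 0, 2]                                 # any M = 3 processors that finish first
y = np.array([F[i] @ x for i in done])
print(np.allclose(mds_decode(G, done, y), A @ x))   # True
```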
(a) Uncoded parallel processing. (b) Using MDS codes. (c) Using Short-Dot.
Figure 2: Different strategies of parallel processing: here M = 3 dot-products of length N = 12 are
being computed using P = 6 processors.
Can we go beyond MDS codes? MDS codes-based strategies require N-length dot-products to be
computed on each processor. Short-Dot instead constructs P vectors of sparsity s (less than N), such
that the dot product of x with any K (≥ M) out of these P short vectors is sufficient to compute the
dot-product of x with all the M given vectors (see Fig. 2c). Compared to MDS codes, Short-Dot
waits for some more processors (since K ≥ M), but each processor computes a shorter dot product.
We also propose Short-MDS, an extension of the MDS codes-based strategy in [1] to create short
dot-products of length s, through block partitioning, and compare it with Short-Dot. In regimes
where N/s is an integer, Short-MDS may be viewed as a special case of Short-Dot. But when N/s is not
an integer, Short-MDS has to wait for more processors in the worst case than Short-Dot for the same
sparsity s, as discussed in Remark 1 in Section 2.
2
Our coded parallelization strategy: Short-Dot
In this section, we provide our strategy of computing the linear transform Ax, where x ∈ R^N
is the input vector and A (of size M × N) = [a₁, a₂, . . . , a_M]ᵀ is a given matrix. Short-Dot constructs a
Figure 3: Short-Dot: Distributes short dot-products over P parallel processors, such that outputs from
any K out of P processors are sufficient to compute successfully.
P × N matrix F = [f₁, f₂, . . . , f_P]ᵀ such that M predetermined linear combinations of any K
rows of F are sufficient to generate each of {a₁ᵀ, . . . , a_Mᵀ}, and any row of F has sparsity at most
s = (N/P)(P − K + M). Each sparse row of F (say f_iᵀ) is sent to the i-th processor (i = 1, . . . , P)
and dot-products of x with all sparse rows are computed in parallel. Let S_i denote the support
(set of non-zero indices) of f_i. Thus, for any unknown vector x, short dot products of length
|S_i| ≤ s = (N/P)(P − K + M) are computed on each processor. Since the linear combination of any
K rows of F can generate the rows of A, i.e., {a₁ᵀ, a₂ᵀ, . . . , a_Mᵀ}, the dot-products from the earliest
K out of P processors can be linearly combined to obtain the linear transform Ax. Before formally
stating our algorithm, we first provide an insight into why such a matrix F exists in the following
theorem, and develop an intuition on the construction strategy.
Theorem 1 Given row vectors {a₁ᵀ, a₂ᵀ, . . . , a_Mᵀ}, there exists a P × N matrix F such that a linear
combination of any K (> M) rows of the matrix is sufficient to generate the row vectors and each
row of F has sparsity at most s = (N/P)(P − K + M), provided P divides N.
Proof: We may append (K − M) rows to A = [a₁, a₂, . . . , a_M]ᵀ, to form a K × N matrix
Ã = [a₁, a₂, . . . , a_M, z₁, . . . , z_{K−M}]ᵀ. The precise choice of these additional vectors will be made
explicit later. Next, we choose B, a P × K matrix such that any square sub-matrix of B is invertible.
E.g., a Vandermonde or Cauchy matrix, or a matrix with i.i.d. Gaussian entries can be shown to
satisfy this property with probability 1. The following lemma shows that any K rows of the matrix
BÃ are sufficient to generate any row of Ã, including {a₁ᵀ, a₂ᵀ, . . . , a_Mᵀ}:

Lemma 1 Let F = BÃ, where Ã is a K × N matrix and B is any P × K matrix such that every
square sub-matrix is invertible. Then, any K rows of F can be linearly combined to generate any
row of Ã.

Proof: Choose an arbitrary index set Ω ⊆ {1, 2, . . . , P} such that |Ω| = K. Let F^Ω be the sub-matrix
formed by the chosen K rows of F indexed by Ω. Then, F^Ω = B^Ω Ã. Now, B^Ω is a K × K sub-matrix
of B, and is thus invertible. Thus, Ã = (B^Ω)⁻¹ F^Ω. The i-th row of Ã is [i-th row of (B^Ω)⁻¹] F^Ω
for i = 1, 2, . . . , K. Thus, each row of Ã is generated by the chosen K rows of F.
In the next lemma, we show how the row sparsity of F can be constrained to be at most s = (N/P)(P − K + M)
by appropriately choosing the appended vectors z₁, . . . , z_{K−M}.

Lemma 2 Given an M × N matrix A = [a₁, . . . , a_M]ᵀ, let Ã = [a₁, . . . , a_M, z₁, . . . , z_{K−M}]ᵀ
be a K × N matrix formed by appending K − M row vectors to A. Also let B be a P × K matrix
such that every square sub-matrix is invertible. Then there exists a choice of the appended vectors
z₁, . . . , z_{K−M} such that each row of F = BÃ has sparsity at most s = (N/P)(P − K + M).
Proof: We select a sparsity pattern that we want to enforce on F and then show that there exists a
choice of the appended vectors z₁, . . . , z_{K−M} such that the pattern can be enforced.
Sparsity pattern enforced on F: This is illustrated in Fig. 4. First, we construct a P × P "unit
block" with a cyclic structure of nonzero entries, where the (K − M) zeros in each row and column
are arranged as shown in Fig. 4. Each row and column have at most s_c = P − K + M non-zero
entries. This unit block is replicated horizontally N/P times to form a P × N matrix with at most
s_c non-zero entries in each column, and at most s = N s_c / P non-zero entries in each row. We
now show how the choice of z₁, . . . , z_{K−M} can enforce this pattern on F.
Figure 4: Sparsity pattern of F: (Left) unit block (P × P); (Right) unit block concatenated N/P
times to form the P × N matrix F with row sparsity at most s.
From F = BÃ, the j-th column of F can be written as F_j = B Ã_j. Each column of F has at
least K − M zeros at locations indexed by U ⊂ {1, 2, . . . , P}. Let B^U denote the ((K − M) × K)
sub-matrix of B consisting of the rows of B indexed by U. Thus, B^U Ã_j = [0]_{(K−M)×1}. Divide
Ã_j into two portions of lengths M and K − M as follows:

Ã_j = [A_jᵀ | zᵀ]ᵀ = [a₁(j) a₂(j) . . . a_M(j) z₁(j) . . . z_{K−M}(j)]ᵀ.

Here A_j = [a₁(j) a₂(j) . . . a_M(j)]ᵀ is the j-th column of the given matrix A, and z =
[z₁(j), . . . , z_{K−M}(j)]ᵀ depends on the choice of the appended vectors. Thus,

B^U_{cols 1:M} A_j + B^U_{cols M+1:K} z = [0]_{(K−M)×1}  ⟹  B^U_{cols M+1:K} z = −B^U_{cols 1:M} A_j
⟹  z = −(B^U_{cols M+1:K})⁻¹ B^U_{cols 1:M} A_j,   (1)

where the last step uses the fact that B^U_{cols M+1:K} is invertible because it is a (K − M) × (K − M)
square sub-matrix of B. This explicitly provides the vector z, which completes the j-th column of Ã.
The other columns of Ã can be completed similarly, proving the lemma.
From Lemmas 1 and 2, for a given M × N matrix A, there always exists a P × N matrix F such
that a linear combination of any K rows of F is sufficient to generate our given vectors and each
row of F has sparsity at most s = (N/P)(P − K + M). This proves the theorem.
With this insight in mind, we now formally state our computation strategy:
Algorithm 1 Short-Dot
[A] Pre-processing step: Encode F (performed offline)
Given: A_{M×N} = [a₁, . . . , a_M]ᵀ = [A₁, A₂, . . . , A_N], parameter K, matrix B_{P×K}
1: For j = 1 to N do
2:   Set U ← ({(j − 1), . . . , (j + K − M − 1)} mod P) + 1
3:     ▷ the set of (K − M) indices that are 0 in the j-th column of F
4:   Set B^U ← rows of B indexed by U
5:   Set z ← −(B^U_{cols M+1:K})⁻¹ B^U_{cols 1:M} A_j   ▷ z is a (K − M) × 1 vector
6:   Set F_j ← B [A_jᵀ | zᵀ]ᵀ   ▷ F_j is a column vector (the j-th column of F)
Encoded output: F_{P×N} = [f₁ f₂ . . . f_P]ᵀ   ▷ row representation of matrix F
7: For i = 1 to P do
8:   Store S_i ← Support(f_i)   ▷ indices of non-zero entries in the i-th row of F
9:   Send f_i^{S_i} to the i-th processor   ▷ the i-th row of F is sent to the i-th processor
[B] Online computations
External input: x. Resources: P parallel processors (P > M).
[B1] Parallelization strategy: divide the task among parallel processors:
1: For i = 1 to P do
2:   Send x^{S_i} to the i-th processor
3:   Compute at the i-th processor: ⟨f_i^{S_i}, x^{S_i}⟩   ▷ u^S denotes the entries of vector u indexed by S
Output: ⟨f_i^{S_i}, x^{S_i}⟩ from the K earliest processors
[B2] Fusion node: decode the dot-products from the processor outputs:
1: Set V ← indices of the K processors that finished first
2: Set B^V ← rows of B indexed by V
3: Set v_{K×1} ← [⟨f_i^{S_i}, x^{S_i}⟩, ∀ i ∈ V]   ▷ column vector of outputs from the first K processors
4: Set Ax = [⟨a₁, x⟩, . . . , ⟨a_M, x⟩]ᵀ ← [(B^V)⁻¹]_{rows 1:M} v
Output: ⟨x, a₁⟩, . . . , ⟨x, a_M⟩
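A compact NumPy rendering of the algorithm is given below. It is a sketch under the paper's assumptions, with a Gaussian B (every square sub-matrix invertible with probability 1); all variable names are ours:

```python
import numpy as np

def shortdot_encode(A, P, K, seed=0):
    """Build F (P x N) with row sparsity <= N(P-K+M)/P; any K rows generate A."""
    M, N = A.shape
    B = np.random.default_rng(seed).standard_normal((P, K))
    F = np.zeros((P, N))
    for j in range(N):
        U = np.arange(j, j + K - M) % P            # rows forced to zero in column j
        BU = B[U]                                  # (K-M) x K
        z = -np.linalg.solve(BU[:, M:], BU[:, :M] @ A[:, j])
        F[:, j] = B @ np.concatenate([A[:, j], z])
        F[U, j] = 0.0                              # exact zeros (drop round-off residue)
    return B, F

def shortdot_decode(B, finished, outputs, M):
    """Step [B2]: combine the K earliest outputs <f_i, x> into Ax."""
    return np.linalg.inv(B[finished])[:M] @ outputs

# Sanity check: M = 2, N = 12, P = 6, K = 4  =>  row sparsity <= 12*(6-4+2)/6 = 8.
rng = np.random.default_rng(1)
A, x = rng.standard_normal((2, 12)), rng.standard_normal(12)
B, F = shortdot_encode(A, P=6, K=4)
assert all(np.count_nonzero(F[i]) <= 8 for i in range(6))
V = [5, 2, 0, 3]                                   # any K = 4 processors finishing first
y = np.array([F[i] @ x for i in V])
print(np.allclose(shortdot_decode(B, V, y, M=2), A @ x))   # True
```

In a real deployment each processor would receive only the non-zero entries f_i^{S_i} and the matching entries of x, which is where the speed-up comes from; the dense products above are only for checking correctness.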
Table 1: Trade-off between the length of the dot-products and parameter K for different strategies

Strategy     Length   Parameter K
Repetition   N        P − P/M + 1
MDS          N        M
Short-Dot    s        P − Ps/N + M

Strategy                          Length   Parameter K
Repetition with block partition   s        P − ⌊P/(M⌈N/s⌉)⌋ + 1
Short-MDS                         s        P − ⌊P/⌈N/s⌉⌋ + M
Remark 1: Short-MDS — a special case of Short-Dot. An extension of the MDS codes-based
strategy proposed in [1], that we call Short-MDS, can be designed to achieve row-sparsity s. First
block-partition the matrix of N columns into ⌈N/s⌉ sub-matrices of size M × s, and also divide
the total processors P equally into ⌈N/s⌉ parts. Now, each sub-matrix can be encoded using
a (P/⌈N/s⌉, M) MDS code. In the worst case, including all integer effects, this strategy requires
K = P − ⌊P/⌈N/s⌉⌋ + M processors to finish. In comparison, Short-Dot requires K = P − Ps/N + M
processors to finish. In the regime where s exactly divides N, Short-MDS can be viewed as a special
case of Short-Dot, as both the expressions match. However, in the regime where s does not exactly
divide N, Short-MDS requires more processors to finish in the worst case than Short-Dot. Short-Dot
is a generalized framework that can achieve a wider variety of pre-specified sparsity patterns as
required by the application. In Table 1, we compare the lengths of the dot-products and the straggler
resilience K, i.e., the number of processors to wait for in the worst case, for different strategies.
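The worst-case K values in Table 1 are easy to tabulate. The short script below (ours) evaluates the Short-Dot and Short-MDS expressions at a point where s does not divide N, illustrating Remark 1:

```python
import math

def K_short_dot(P, N, s, M):
    return P - (P * s) // N + M                    # K = P - Ps/N + M

def K_short_mds(P, N, s, M):
    return P - P // math.ceil(N / s) + M           # K = P - floor(P / ceil(N/s)) + M

P, N, M, s = 6, 120, 2, 80                          # s does not divide N here
print(K_short_dot(P, N, s, M))                      # 4
print(K_short_mds(P, N, s, M))                      # 5: Short-MDS waits for one more
```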
3
Limits on the trade-off between the length of dot-products and parameter K
Theorem 2 Let A_{M×N} be any matrix such that each column has at least one non-zero element. If
the linear combination of any K rows of F_{P×N} can generate the M rows of A_{M×N}, then the average
sparsity s of each row of F_{P×N} must satisfy s ≥ N(1 − K/P) + N/P.
Proof: We claim that K is strictly greater than the maximum number of zeros that can occur
in any column of the matrix F. If not, suppose the j-th column of F has more than K zeros.
Then there exists a linear combination of K rows of F that will always have 0 at the j-th column
index, and it is not possible to generate any row of the given matrix A. Thus, K is no less than
1 + max no. of 0s in any column of F. Since the maximum value is always greater than the average,

K ≥ 1 + avg. no. of 0s in any column of F ≥ 1 + (N − s)P / N.   (2)

A slight re-arrangement establishes the aforementioned lower bound.
Short-Dot achieves a row-sparsity of at most s = N(1 − K/P) + NM/P, while the lower bound for any
such strategy is s ≥ N(1 − K/P) + N/P. Notice that the bounds only differ in the second term. We
believe that the difference in the bounds arises due to the looseness of the fundamental limit: our
technique is based on a derivation for M = 1 (where the bound is tight), and could be tightened for M > 1.
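The gap between the achievable sparsity and the bound is only in the second term, which a toy check (ours) makes visible:

```python
def achievable(N, P, K, M):
    return N * (1 - K / P) + N * M / P             # Short-Dot: s = N(1 - K/P) + NM/P

def lower_bound(N, P, K):
    return N * (1 - K / P) + N / P                 # Theorem 2: s >= N(1 - K/P) + N/P

N, P, M = 1200, 24, 4
for K in (8, 16, 22):
    print(K, achievable(N, P, K, M), lower_bound(N, P, K))
# For every K the two expressions differ by N(M - 1)/P, independent of K.
```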
4
Analysis of expected computation time for exponential tail models
We now provide a probabilistic analysis of the computational time required by Short-Dot and compare
it with uncoded parallel processing, repetition and MDS codes as shown in Fig. 5. Table 2 shows
the order-sense expected computation time in the regimes where M is linear and sub-linear in P .
A detailed analysis is provided in the supplement. Assume that the time required by a processor to
Figure 5: Expected computation time: Short-Dot is faster than MDS when M ≪ P and faster than Uncoded
when M ≈ P, and is universally faster over the entire range of M. For this choice of straggling
parameter, Repetition is slowest. When M does not exactly divide P, the distribution of computation
time for repetition and uncoded strategies is the maximum of non-identical but independent random
variables, which produces the ripples in these curves (see supplement for details).
compute a single dot-product follows an exponential distribution and is independent of the other
processors, as described in [1]. Let the time required to compute a single dot-product of length N
be distributed as: Pr(T_N ≤ t) = 1 − exp(−μ(t/N − 1)) ∀ t ≥ N. Here, μ is the "straggling
parameter" that determines the unpredictable latency in computation time. For an s-length dot product,
we simply replace N by s. The expected computation time for Short-Dot is the expected value of the
K-th order statistic of these P i.i.d. exponential random variables, which is given by:

E(T) = s (1 + log(P/(P − K))/μ) = (N(P − K + M)/P) (1 + log(P/(P − K))/μ).   (3)

Here, (3) uses the fact that the expected value of the K-th order statistic of P i.i.d. exponential random
variables with parameter 1 is Σ_{i=1}^{P} 1/i − Σ_{i=1}^{P−K} 1/i ≈ log(P) − log(P − K) [1]. The expected
computation time in the RHS of (3) is minimized when P − K = Θ(M). This minimal expected
time is O(MN/P) for M linear in P and is O((MN/P) log(P/M)) for M sub-linear in P.

Table 2: Probabilistic computation times

Strategy                     E(T)                                       M linear in P       M sub-linear in P
Only one processor           MN(1 + 1/μ)                                Θ(MN)               Θ(MN)
Uncoded (M divides P)²       (MN/P)(1 + log(P)/μ)                       Θ((MN/P) log P)     Θ((MN/P) log P)
Repetition (M divides P)²    N(1 + M log(M)/(Pμ))                       Θ((MN/P) log P)     Θ(N)
MDS                          N(1 + log(P/(P − M))/μ)                    Θ(N)                Θ(N)
Short-Dot                    (N(P − K + M)/P)(1 + log(P/(P − K))/μ)     O(MN/P)             O((MN/P) log(P/M))

² Refer to the Supplement for a more accurate analysis taking integer effects into account.
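Eq. (3) and the first column of Table 2 can be evaluated directly. The script below (ours) sweeps K for Short-Dot under the exponential model and compares against the uncoded and MDS expressions:

```python
import numpy as np

def E_T_short_dot(N, P, M, K, mu):
    s = N * (P - K + M) / P
    return s * (1 + np.log(P / (P - K)) / mu)      # Eq. (3); requires M <= K < P

def E_T_mds(N, P, M, mu):
    return N * (1 + np.log(P / (P - M)) / mu)

def E_T_uncoded(N, P, mu):
    return (N / P) * (1 + np.log(P) / mu)

N, P, M, mu = 10000, 100, 10, 1.0
best_K = min(range(M, P), key=lambda K: E_T_short_dot(N, P, M, K, mu))
print("best K for Short-Dot:", best_K)             # P - K ends up on the order of M
print("Short-Dot:", E_T_short_dot(N, P, M, best_K, mu))
print("MDS      :", E_T_mds(N, P, M, mu))
print("Uncoded  :", E_T_uncoded(N, P, mu))
```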
Encoding and decoding complexity: Even though encoding is a pre-processing step (since A is
assumed to be given in advance), we include a complexity analysis for the sake of completeness. The
encoding requires N/P matrix inversions of size (K − M), and a P × K matrix multiplication with
a K × N matrix. The naive encoding complexity is therefore O((N/P)(K − M)³ + NKP). This is
higher than MDS codes, which have an encoding complexity of O(NMP), but it is only a one-time cost
that provides savings in the online steps (as discussed earlier in this section). The decoding complexity
of Short-Dot is O(K³ + KM), which does not depend on N when M, K ≪ N. This is nearly the
same as the O(M³ + M²) complexity of MDS codes. We believe that the complexities might be reduced
further, based on special choices of the encoding matrix B.
Table 3: Experimental computation time of 10000 dot products (N = 785, M = 10, P = 20)

Strategy    Parameter K   Mean      STDEV    Minimum Time   Maximum Time
Uncoded     20            11.8653   2.8427   9.5192         27.0818
Short-Dot   18            10.4306   0.9253   8.2145         11.8340
MDS         10            15.3411   0.8987   13.8232        17.5416

5
Experimental Results
We perform experiments on computing clusters at CMU to test the computational time. We use
HTCondor [20] to schedule jobs simultaneously among the P processors. We compare the time
required to classify 10000 handwritten digits of the MNIST [21] database, assuming we are given a
trained 1-layer neural network. We separately trained the neural network using training samples to
form a matrix of weights, denoted by A (of size 10 × 785). For testing, the multiplication of this given 10 × 785
matrix with the test data matrix X (of size 785 × 10000) is considered. The total number of processors was 20.
Assuming that A is encoded into F (of size 20 × 785) in a pre-processing step, we store the rows of F
in each processor a priori. Now portions of the data matrix X of size s × 10000 are sent to each of
the P parallel processors as input. We also send a C program to compute dot-products of length
s = (N/P)(P − K + M) with appropriate rows of F using the command condor_submit. Each processor
outputs the value of one dot-product. The computation time reported in Fig. 6 includes the total time
required to communicate inputs to each processor, compute the dot-products in parallel, fetch the
required outputs, and decode and classify all the 10000 test images, based on 35 experimental runs.
Figure 6: Experimental results: (Left) Mean computation time for Uncoded Strategy, Short-Dot
(K=18) and MDS codes: Short-Dot is faster than MDS by 32% and Uncoded by 12%. (Right) Scatter
plot of computation time for different experimental runs: Short-Dot is faster most of the time.
Key observations (see Table 3 for detailed results): Computation time varies with the nature of
straggling at the particular instant of the experimental run. Short-Dot outperforms both MDS and
Uncoded in mean computation time. Uncoded is faster than MDS since the per-processor computation
time for MDS is larger, and it increases the straggling, even though MDS waits for only 10 out of
20 processors. However, note that Uncoded has more variability than both MDS and Short-Dot, and
its maximum time observed during the experiment is much greater than both MDS and Short-Dot.
The classification accuracy was 85.98% on test data.
6
Discussion
While we have presented the case of M < P here, Short-Dot easily generalizes to the case where
M ≥ P. The matrix can be divided horizontally into several chunks along the row dimension (shorter
matrices) and Short-Dot can be applied on each of those chunks one after another. Moreover if rows
with same sparsity pattern are grouped together and stored in the same processor initially, then the
communication cost is also significantly reduced during the online computations, since only some
elements of the unknown vector x are sent to a particular processor.
Acknowledgments: Systems on Nanoscale Information fabriCs (SONIC), one of the six SRC
STARnet Centers, sponsored by MARCO and DARPA. We also acknowledge NSF Awards 1350314,
1464336 and 1553248. S. Dutta also received the Prabhu and Poonam Goel Graduate Fellowship.
8
References
[1] Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan
Ramchandran. Speeding Up Distributed Machine Learning Using Codes. NIPS Workshop on
Learning Systems, 2015.
[2] Da Wang, Gauri Joshi, and Gregory Wornell. Using straggler replication to reduce latency
in large-scale parallel computing. In ACM SIGMETRICS Performance Evaluation Review,
volume 43, pages 7?11, 2015.
[3] Da Wang, Gauri Joshi, and Gregory Wornell. Efficient Task Replication for Fast Response Times
in Parallel Computation. In ACM SIGMETRICS Performance Evaluation Review, volume 42,
pages 599?600, 2014.
[4] Gauri Joshi, Yanpei Liu, and Emina Soljanin. On the delay-storage trade-off in content download
from coded distributed storage systems. IEEE Journal on Selected Areas in Communications,
32(5):989?997, 2014.
[5] Longbo Huang, Sameer Pawar, Hao Zhang, and Kannan Ramchandran. Codes can reduce
queueing delay in data centers. In Proceedings IEEE International Symposium on Information
Theory (ISIT), pages 2766?2770, 2012.
[6] William Dally. High-performance hardware for machine learning. NIPS Tutorial, 2015.
[7] Vipin Kumar, Ananth Grama, Gupta Anshul, and George Karypis. Introduction to Parallel
Computing: Design and Analysis of Algorithms. The Benjamin/Cummings Publishing Company,
Inc., Redwood City, 1994.
[8] Geoffrey C Fox, Steve W Otto, and Anthony JG Hey. Matrix algorithms on a hypercube I:
Matrix multiplication. Parallel computing, 4(1):17?31, 1987.
[9] Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354?356,
1969.
[10] Grey Ballard, James Demmel, Olga Holtz, and Oded Schwartz. Communication costs of
strassen?s matrix multiplication. Communications of the ACM, 57(2):107?114, 2014.
[11] Jeffrey Dean and Luiz Andr? Barroso. The tail at scale. Communications of the ACM, 56(2):74?
80, 2013.
[12] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler. The Hadoop
Distributed File System. In Proceedings IEEE Symposium on Mass Storage Systems and
Technologies (MSST), pages 1?10, 2010.
[13] Kuang-Hua Huang and Jacob A. Abraham. Algorithm-based fault tolerance for matrix operations. IEEE transactions on computers, 100(6):518?528, 1984.
[14] Thomas Herault and Yves Robert. Fault-Tolerance Techniques for High Performance Computing.
Springer, 2015.
[15] William Ryan and Shu Lin. Channel codes: Classical and Modern. Cambridge University
Press, 2009.
[16] Songze Li, Mohammad Ali Maddah-Ali, and A Salman Avestimehr. A unified coding framework
for distributed computing with straggling servers. arXiv:1609.01690v1 [cs.IT], 2016.
[17] Ihab Nahlus, Eric P Kim, Naresh R Shanbhag, and David Blaauw. Energy-efficient Dot-Product
Computation using a Switched Analog Circuit Architecture. In International Symposium on
Low Power Electronics and Design (ISLPED), pages 315?318, 2014.
[18] Ning C Wang, Sujan K Gonugondla, Ihab Nahlus, Naresh Shanbhag, and Eric Pop. GDOT: a
Graphene-Based Nanofunction for Dot-Product Computation. In IEEE Symposium on VLSI
Technology, 2016.
[19] HTCondor. https://research.cs.wisc.edu/htcondor/.
[20] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten
digits. http://yann.lecun.com/exdb/mnist, 1998.
Planar Hidden Markov Modeling:
from Speech to Optical Character Recognition
Esther Levin and Roberto Pieraccini
AT&T Bell Laboratories
600 Mountain Ave.
Murray Hill, NJ 07974
Abstract
We propose in this paper a statistical model (planar hidden Markov model —
PHMM) describing statistical properties of images. The model generalizes
the single-dimensional HMM, used for speech processing, to the planar case.
For this model to be useful an efficient segmentation algorithm, similar to the
Viterbi algorithm for HMM, must exist. We present conditions in terms of
the PHMM parameters that are sufficient to guarantee that the planar
segmentation problem can be solved in polynomial time, and describe an
algorithm for that. This algorithm aligns optimally the image with the model,
and therefore is insensitive to elastic distortions of images. Using this
algorithm a joint optimal segmentation and recognition of the image can be
performed, thus overcoming the weakness of traditional OCR systems where
segmentation is performed independently before the recognition, leading to
unrecoverable recognition errors.
The PHMM approach was evaluated using a set of isolated hand-written
digits. An overall digit recognition accuracy of 95% was achieved. An
analysis of the results showed that even in the simple case of recognition of
isolated characters, the elimination of elastic distortions enhances the
performance significantly. We expect that the advantage of this approach will
be even more significant for tasks such as connected writing
recognition/spotting, for which there is no known high accuracy method of
recognition.
1 Introduction
The performance of traditional OCR systems deteriorates very quickly when documents
are degraded by noise, blur, and other forms of distortion. The main reason for such
deterioration is that in addition to the intra-class character variability caused by distortion,
the segmentation of the text into words and characters becomes a nontrivial task. In most
of the traditional systems, such segmentation is done before recognition, leading to many
recognition errors, since recognition algorithms cannot usually recover from errors
introduced in the segmentation phase. Moreover, in many cases the segmentation is ill-defined,
since many plausible segmentations might exist, and only grammatical and
linguistic analysis can find the "right" one. To address these problems, an algorithm is
needed that can:
• be tolerant to distortions leading to intra-class variability
• perform segmentation together with recognition, thus jointly optimizing both
processes, while incorporating grammatical/linguistic constraints.
In this paper we describe a planar segmentation algorithm that has the above properties.
It results from a direct extension of the Viterbi (Forney, 1973) algorithm, widely used in
automatic speech recognition, to two-dimensional signals.
In the next section we describe the basic hidden Markov model and define the
segmentation problem. In section 3 we introduce the planar HMM that extends the HMM
concept to model images. The planar segmentation problem for PHMM is defined in
section 4. It was recently shown (Kearns and Levin, 1992) that the planar segmentation
problem is NP-hard, and therefore, in order to obtain an effective planar segmentation
algorithm, we propose to constrain the parameters of the PHMM. We show sufficient
conditions in terms of PHMM parameters for such algorithm to exist and describe the
algorithm. This approach differs from the one taken in references (Chellappa and
Chatterjee, 1985) and (Derin and Elliot, 1987), where instead of restricting the problem,
a suboptimal solution to the general problem was found. Since in (Kearns and Levin,
1992) it was also shown that planar segmentation problem is hard to approximate, such
suboptimal solution doesn't have any guaranteed bounds. The segmentation algorithm
can now be used effectively not only for aligning isolated images, but also for joint
recognition/segmentation, eliminating the need of independent segmentation that usually
leads to unrecoverable errors in recognition. The same algorithm is used for estimation of
the parameters of the model given a set of example images. In section 5, results of
isolated hand-written digit recognition experiments are presented. The results indicate
that even in the simple case of isolated characters, the elimination of planar distortions
enhances the performance significantly. Section 6 contains the summary of this work.
2 Hidden Markov Model
The HMM is a statistical model that is used to describe temporal signals
$G = \{g(t) : 1 \le t \le T,\ g \in \mathcal{G} \subset \mathbb{R}^n\}$ in speech processing applications (Rabiner, 1989; Lee
et al., 1990; Wilpon et al., 1990; Pieraccini and Levin, 1991). The HMM is a composite
statistical source comprising a set $S = \{1, \ldots, T_R\}$ of $T_R$ sources called states. The $i$-th
state, $i \in S$, is characterized by its probability distribution $p_i(g)$ over $\mathcal{G}$. At each time $t$
only one of the states is active, emitting the observable $g(t)$. We denote by $s(t)$, $s(t) \in S$,
the random variable corresponding to the active state at time $t$. The joint probability
distribution (for real-valued $g$) or discrete probability mass (for $g$ being a discrete
variable) $P(s(t), g(t))$ for $t > 1$ is characterized by the following property:
$$P(s(t), g(t) \mid s(1{:}t{-}1), g(1{:}t{-}1)) = P(s(t) \mid s(t{-}1))\, P(g(t) \mid s(t)) = P(s(t) \mid s(t{-}1))\, p_{s(t)}(g(t)), \qquad (1)$$
where $s(1{:}t{-}1)$ stands for the sequence $\{s(1), \ldots, s(t{-}1)\}$, and
$g(1{:}t{-}1) = \{g(1), \ldots, g(t{-}1)\}$. We denote by $a_{ij}$ the transition probability
$P(s(t){=}j \mid s(t{-}1){=}i)$, and by $\pi_i$ the probability of state $i$ being active at $t=1$,
$\pi_i = P(s(1){=}i)$. The probability of the entire sequence of states $S = s(1{:}T)$ and
observations $G = g(1{:}T)$ can be expressed as
$$P(G,S) = \pi_{s(1)}\, p_{s(1)}(g(1)) \prod_{t=2}^{T} a_{s(t-1)s(t)}\, p_{s(t)}(g(t)). \qquad (2)$$
The interpretation of equations (1) and (2) is that the observable sequence $G$ is generated
in two stages: first, a sequence $S$ of $T$ states is chosen according to the Markovian
distribution parametrized by $\{a_{ij}\}$ and $\{\pi_i\}$; then each one of the states $s(t)$, $1 \le t \le T$, in $S$
generates an observable $g(t)$ according to its own memoryless distribution $p_{s(t)}$, forming
the observable sequence $G$. This model is called a hidden Markov model, because the
state sequence $S$ is not given, and only the observation sequence $G$ is known. A
particular case of this model, called a left-to-right HMM, where $a_{ij} = 0$ for $j < i$, and
$\pi_1 = 1$, is especially useful for speech recognition. In this case each state of the model
represents an unspecified acoustic unit, and due to the "left-to-right" structure, the whole
word is modeled as a concatenation of such acoustic units. The time spent in each of the
states is not fixed, and therefore the model can take into account the duration variability
between different utterances of the same word.
The segmentation problem of HMM is that of estimating the most probable state
sequence $\hat{S}$, given the observation $G$,
$$\hat{S} = \arg\max_S P(S \mid G) = \arg\max_S P(G,S). \qquad (3)$$
The problem of finding $\hat{S}$ through exhaustive search is of exponential complexity, since
there exist $T_R^T$ possible state sequences, but it can be solved in polynomial time using a
dynamic programming approach (i.e. Viterbi algorithm). The segmentation plays a
central role in all HMM-based speech recognizers, since for connected speech it gives the
segmentation into words or sub-word units, and performs a recognition simultaneously, in
an optimal way. This is in contrast to sequential systems, in which the connected speech
is first segmented into words/subwords according to some rules, and then the individual
segments are recognized by computing the appropriate likelihoods, and where many
recognition errors are caused by unrecoverable segmentation errors. Higher-level
syntactical knowledge can be integrated into decoding process through transition
probabilities between the models. The segmentation is also used for estimating the
HMM parameters using a corpus of training data.
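As an illustration of how equation (3) is solved by dynamic programming, the following minimal Python sketch (ours, not part of the original paper) implements the Viterbi recursion for a discrete-observation HMM; the array names pi, A, and B are our own conventions and are assumed to be given.

    import numpy as np

    def viterbi(pi, A, B, obs):
        # pi : (n,) initial state probabilities, pi[i] = P(s(1)=i)
        # A  : (n, n) transition matrix, A[i, j] = P(s(t)=j | s(t-1)=i)
        # B  : (n, M) emission probabilities, B[i, g] = p_i(g)
        # obs: length-T list of discrete observations g(1..T)
        T, n = len(obs), len(pi)
        logd = np.log(pi) + np.log(B[:, obs[0]])   # best log-likelihood ending in each state
        back = np.zeros((T, n), dtype=int)         # backpointers
        for t in range(1, T):
            cand = logd[:, None] + np.log(A)       # cand[i, j]: extend best path in i by i -> j
            back[t] = np.argmax(cand, axis=0)
            logd = cand[back[t], np.arange(n)] + np.log(B[:, obs[t]])
        path = [int(np.argmax(logd))]              # backtrack the most probable state sequence
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]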
3 The Two-Dimensional Case: Planar HMM
In this section we describe a statistical model for a planar image
$G = \{g(x,y) : (x,y) \in L_{X,Y},\ g \in \mathcal{G}\}$. We call this model "Planar HMM" (PHMM) and
design it to extend the advantages of conventional HMM to the two-dimensional case.
The PHMM is a composite source, comprising a set $S = \{(x,y),\ 1 \le x \le X_R,\ 1 \le y \le Y_R\}$ of
$N = X_R Y_R$ states. Each state in $S$ is a stochastic source characterized by its probability
density $p_{x,y}(g)$ over the space of observations $g \in \mathcal{G}$. It is convenient to think of the
states of the model as being located on a rectangular lattice where each state corresponds
to a pixel of the corresponding reference image. Similarly to the conventional HMM,
only one state is active in the generation of the $(x,y)$-th image pixel $g(x,y)$. We denote
by $s(x,y) \in S$ the active state of the model that generates $g(x,y)$. The joint distribution
governing the choice of active states and image values has the following Markovian
property:
$$P(g(x,y), s(x,y) \mid g(1{:}X, 1{:}y{-}1),\, g(1{:}x{-}1, y),\, s(1{:}X, 1{:}y{-}1),\, s(1{:}x{-}1, y)) = P(g(x,y) \mid s(x,y))\, P(s(x,y) \mid s(x{-}1,y), s(x,y{-}1)) = p_{s(x,y)}(g(x,y))\, P(s(x,y) \mid s(x{-}1,y), s(x,y{-}1)), \qquad (4)$$
where $g(1{:}X, 1{:}y{-}1) = \{g(x,y) : (x,y) \in R_{X,y-1}\}$, $g(1{:}x{-}1, y) = \{g(1,y), \ldots, g(x{-}1,y)\}$, and
$s(1{:}X, 1{:}y{-}1)$, $s(1{:}x{-}1,y)$ are the active states involved in generating $g(1{:}X, 1{:}y{-}1)$,
$g(1{:}x{-}1,y)$, respectively, and $R_{X,y-1}$ is an axis-parallel rectangle between the origin and
the point $(X, y{-}1)$. Similarly to the one-dimensional case, it is useful to define a left-to-right bottom-up PHMM where $P(s(x,y){=}(m,n) \mid s(x{-}1,y){=}(i,j),\, s(x,y{-}1){=}(k,l)) \neq 0$ only
when $i \le m$ and $l \le n$, which does not allow for "fold overs" in the state image. The
Markovian property (4) allows the left-to-right bottom-up PHMM to model elastic
distortions among different realizations of the same image, similarly to the way the
Markovian property in left-to-right HMM handles temporal alignment. We have chosen
this definition (4) of Markovian property rather than others (see for example Derin and
Kelly, 1989) since it leads to the formulation of a segmentation problem which is similar
to the planar alignment defined in (Levin and Pieraccini, 1992).
Using property (4), the joint likelihood of the image $G = g(1{:}X, 1{:}Y)$ and the state image
$S = s(1{:}X, 1{:}Y)$ can be written as
$$P(G,S) = \prod_{x=1}^{X}\prod_{y=1}^{Y} p_{s(x,y)}(g(x,y)) \cdot \pi_{s(1,1)} \prod_{x=2}^{X} a^{H}_{s(x-1,1),\,s(x,1)} \prod_{y=2}^{Y} a^{V}_{s(1,y-1),\,s(1,y)} \prod_{y=2}^{Y}\prod_{x=2}^{X} A_{s(x-1,y),\,s(x,y-1),\,s(x,y)}, \qquad (5)$$
where
$$a^{H}_{(i,j),(m,n)} = P(s(x,1){=}(m,n) \mid s(x{-}1,1){=}(i,j)),$$
$$a^{V}_{(k,l),(m,n)} = P(s(1,y){=}(m,n) \mid s(1,y{-}1){=}(k,l)),$$
$$A_{(i,j),(k,l),(m,n)} = P(s(x,y){=}(m,n) \mid s(x{-}1,y){=}(i,j),\, s(x,y{-}1){=}(k,l)),$$
and $\pi_{ij} = P(s(1,1){=}(i,j))$
denote the generalized transition probabilities of the PHMM. Similarly to HMM, (5)
suggests that an image $G$ is generated by the PHMM in two successive stages: in the first
stage the state matrix $S$ is generated according to the Markovian probability distribution
parametrized by $\{A\}$, $\{a^H\}$, $\{a^V\}$, and $\{\pi\}$. In the second stage, the image value in the
$(x,y)$-th pixel is produced independently from other pixels according to the distribution of
the $s(x,y)$-th state $p_{s(x,y)}(g)$. As in HMM, the state matrix $S$ in most of the applications
is not known; only $G$ is observed.
4 Planar Segmentation Problem
The segmentation problem of PHMM is that of finding the state matrix $\hat{S}$ that best
explains the observable image $G$ and defines an optimal alignment of the image to the
model. Solving this problem eliminates the sensitivity to intra-class elastic distortions and
allows for simultaneous segmentation/recognition of images similarly to the one-dimensional
case. $\hat{S}$ can be estimated as in (3) by $\hat{S} = \arg\max_S P(G,S)$. If we approach this
maximization by exhaustive search, the computational complexity is exponential, since
there are $(X_R Y_R)^{XY}$ different state matrices. Since the segmentation problem is NP-hard
(Kearns and Levin, 1992), we suggest to simplify the problem by constraining the
parameters of the PHMM, so that an efficient segmentation algorithm can be found. In this
section we present conditions in terms of the generalized transition probabilities of
PHMM that are sufficient to guarantee that the most likely state image $\hat{S}$ can be computed
in polynomial time, and describe an algorithm for doing that.

For the problem of finding $\hat{S}$ to be solved in polynomial time, there should exist a
grouping of the set $S$ of states of the model into $N_G$ mutually exclusive$^1$ subsets of states
$\gamma_p$, $S = \bigcup_{p=1}^{N_G} \gamma_p$. The generalized transition probabilities should satisfy the two following
constraints with respect to such grouping:
$$A_{(i,j),(k,l),(m,n)} \neq 0;\quad a^{H}_{(i,j),(m,n)} \neq 0 \qquad \text{only if there exists } p,\ 1 \le p \le N_G,\ \text{such that } (i,j),(m,n) \in \gamma_p. \qquad (6)$$
$$A_{(i,j),(k,l),(m,n)} = A_{(i,j),(k_1,l_1),(m,n)};\quad a^{V}_{(k,l),(m,n)} = a^{V}_{(k_1,l_1),(m,n)} \qquad (7)$$
if there exists $p$, $1 \le p \le N_G$, such that $(k,l),(k_1,l_1) \in \gamma_p$.

$^1$ It is possible to drop the mutual exclusiveness constraints by duplicating states, but then we have to
ensure that the number of subsets, $N_G$, should be polynomial in the dimensions of the model $X_R \cdot Y_R$.
Condition (6) means that the left neighbor $(i,j)$ of the state $(m,n)$ in the state matrix $S$
must be a member of the same subset $\gamma_p$ as $(m,n)$. Condition (7) means that the value of the
transition probability $A_{(i,j),(k,l),(m,n)}$ does not depend explicitly on the identity $(k,l)$ of the
bottom neighboring state, but only on the subset $\gamma_p$ to which $(k,l)$ belongs.
"
Under (6) and (7) the most likely state matrix S can be found using an algorithm
described in (Levin and Pieraccini, 1992). This algorithm makes use of the Viterbi
procedure at two different levels. In the first stage optimal segmentation is computed for
each subset yp with each image raw using Viterbi. Then global segmentation is fmmd,
through Viterbi, by combining optimally the segmentations obtained in the previous
stage.
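To make the two-level structure concrete, here is a minimal Python sketch (ours, not from the paper) for a grouping of model states into rows, the example considered below: a horizontal Viterbi scores the alignment of every model row against every image row, and a vertical Viterbi then combines these scores. For simplicity it assumes a strictly left-to-right, bottom-up model in which states may repeat but may not be skipped, and it omits the transition-probability terms.

    import numpy as np

    def row_score(logp_row, pixels):
        # logp_row: (X_R, 2) log P(g | state) for each state of one model row
        # pixels  : (X,) binary pixels of one image row; X >= X_R is assumed
        X_R, X = len(logp_row), len(pixels)
        D = np.full((X_R, X), -np.inf)
        D[0, 0] = logp_row[0, pixels[0]]
        for x in range(1, X):
            for i in range(X_R):
                prev = D[i, x - 1] if i == 0 else max(D[i, x - 1], D[i - 1, x - 1])
                D[i, x] = prev + logp_row[i, pixels[x]]   # stay in state i or advance from i-1
        return D[-1, -1]                                  # every state visited, none skipped

    def planar_align(logp, image):
        # logp: (Y_R, X_R, 2) state log-probabilities; image: (Y, X) binary pixels
        Y_R, Y = logp.shape[0], image.shape[0]
        # Stage 1: horizontal Viterbi score of every (model row, image row) pair.
        S = np.array([[row_score(logp[p], image[y]) for y in range(Y)] for p in range(Y_R)])
        # Stage 2: vertical Viterbi assigning model rows to image rows, bottom-up.
        D = np.full((Y_R, Y), -np.inf)
        D[0, 0] = S[0, 0]
        for y in range(1, Y):
            for p in range(Y_R):
                prev = D[p, y - 1] if p == 0 else max(D[p, y - 1], D[p - 1, y - 1])
                D[p, y] = prev + S[p, y]
        return D[-1, -1]                                  # joint alignment log-likelihood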
Although conditions (6), (7) are hard to check in practice since any possible grouping of
the states has to be considered, they can be effectively used in constructive mode, i.e.,
choosing one particular grouping, and then imposing the constraints (6) and (7) on the
generalized transition probabilities with respect to this grouping. For example, if we
choose $\gamma_p = \{(x,y) \mid 1 \le x \le X_R,\ y = p\}$, $1 \le p \le Y_R$, then the constraints (6), (7) become:
$$A_{(i,j),(k,l),(m,n)} \neq 0,\quad a^{H}_{(i,j),(m,n)} \neq 0 \qquad \text{only for } j = n, \qquad (8)$$
and
$$A_{(i,j),(k,l),(m,n)} = A_{(i,j),(k_1,l),(m,n)},\quad a^{V}_{(k,l),(m,n)} = a^{V}_{(k_1,l),(m,n)} \qquad \text{for } 1 \le k, k_1 \le X_R. \qquad (9)$$
Note that constraints (6), (7) break the symmetry between the roles of the two
coordinates. Other sets of conditions can be obtained from (6) and (7) by coordinate
transformation. For example, the roles of the vertical and the horizontal axes can be
exchanged. A grouping and constraints set chosen for a particular application should
reflect the geometric properties of the images.
5 Experimental Results
The PHMM approach was tested on a writer-independent isolated handwritten digit
recognition application. The data we used in our experiments was collected from 12
subjects (6 for training and 6 for test). Each subject was asked to write 10 samples of
each digit. Samples were written in fixed-size boxes, therefore naturally size-normalized
and centered. Each sample in the database was represented by a 16x16 binary image.
Each character class (digit) was represented by a single PHMM, satisfying (6) and (7).
Each PHMM had a strictly left-to-right bottom-up structure, where the state matrix S was
restricted to contain every state of the model, i.e., states could not be skipped. All models
had the same number of states. Each state was represented by its own binary probability
distribution, i.e., the probability of a pixel being 1 (black) or 0 (white). We estimated
these probabilities from the training data with the following generalization of the Viterbi
training algorithm (Jelinek, 1976). For the initialization we uniformly divided each
training image into regions corresponding to the states of its model. The initial value of
$P_i(g{=}1)$ for the $i$-th state was obtained as a frequency count of the black pixels in the
corresponding region over all the samples of the same digit. Each iteration of the
algorithm consisted of two stages: first, the samples were aligned with the corresponding
model, by finding the best state matrix $\hat{S}$. Then, a new frequency count for each state was
used to update $P_i(1)$, according to the obtained alignment. We noticed that the training
procedure converged usually after 2-4 iterations, and in all the experiments the algorithm
was stopped at the 10th iteration. The recognition was performed by assigning the test
sample to the class $k$ for which the alignment likelihood was maximal.
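A compact sketch of this training loop (our illustration; init_uniform_regions and best_state_matrix are hypothetical helpers standing in for the uniform initialization and the planar alignment described above):

    import numpy as np

    def viterbi_train(images, X_R, Y_R, n_iter=10, eps=1e-3):
        # images: list of (Y, X) binary arrays, all samples of one digit class
        P1 = init_uniform_regions(images, X_R, Y_R)    # hypothetical: counts on a uniform grid split
        for _ in range(n_iter):
            counts = np.zeros((Y_R, X_R))
            totals = np.zeros((Y_R, X_R))
            for img in images:
                S = best_state_matrix(P1, img)         # hypothetical: planar alignment of img to model
                for (y, x), (p, i) in S.items():       # pixel (y, x) assigned to state (p, i)
                    counts[p, i] += img[y, x]
                    totals[p, i] += 1
            P1 = np.clip(counts / np.maximum(totals, 1), eps, 1 - eps)  # re-estimate P(g=1) per state
        return P1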
Table 1 shows the number of errors in the recognition of the training set and the test set
for different sizes of the models.

    Number of states (XR = YR)   Training   Test
     6                               78       82
     8                               36       50
     9                               48       35
    10                               26       32
    11                               21       38
    12                               42       18
    16                               36       64

Table 1: Number of errors in the recognition of the training set and the test set for
different sizes of the models (out of 600 trials in both cases).
It is worth noting the following two points. First, the test error shows a minimum for
XR =YR =10 of 5%. By increasing or decreasing the number of states this error increases.
This phenomenon is due to the following:
1. The typical under/over-parametrization behavior.
2. Increasing the number of states closer to the size of the modeled images reduces the
flexibility of the alignment procedure, making this a trivial uniform alignment
when XR = YR = 16.
Also, the training error decreases monotonically with increasing number of states up to
XR =YR = 16. This is again typical behavior for such systems, since by increasing the
number of states, the number of model parameters grows, improving the fit to the training
data. But when the number of states equals the dimensions of the sample images,
XR = YR = 16, there is a sudden significant increase in the training error. This behavior is
consistent with point (2) above.
Fig. 1 shows three sets of models with different numbers of states. The states of the
models in this figure are represented by squares, where the grey level of the square
encodes the probability P(g=I). The (6x6) state models have a very coarse
representation of the digits, because the number of states is so small. The (10x10) state
models appear much sharper than the (16x16) state models, due to their ability to align
the training samples.
This preliminary experiment shows that eliminating elastic distortions by the alignment
procedure discussed above plays an important role in the task of isolated character
recognition, improving the recognition accuracy significantly. Note that the simplicity of
this task does not stress the full power of the PHMM representation, since the data was
isolated, size-normalized, and centered. On this task, the achieved performance is
comparable to that of many other OCR systems. We expect that in harder tasks, involving
connected text, the advantage of the proposed method will enhance the performance.
Recently, this approach is being successfully applied to the task of recognition of noisy
degraded printed messages (Agazzi et aL, 1993).
6 Summary and Discussion
In this paper we describe a planar hidden Markov model and develop a planar
segmentation algorithm that generalizes the Viterbi procedure widely used in speech
recognition.
This algorithm can be used to perform joint optimal
recognition/segmentation of images incorporating some grammatical constraints and
tolerating intra-class elastic distortions. The PHMM approach was tested on an isolated,
hand-written digit recognition application. An analysis of the results indicates that even in
a simple case of isolated characters, the elimination of elastic distortions enhances
recognition performance significantly. We expect that the advantage of this approach will
be even more valuable in harder tasks, such as cursive writing recognition/spotting, for
which an effective solution using the currently available techniques has not yet been found.
Figure 1: Three sets of models with 6x6, 10x10, and 16x16 states.
References
O. E. Agazzi, S. S. Kuo, E. Levin, R. Pieraccini, " Connected and Degraded Text
Recognition Using Planar Hidden Markov Models,"
Proc. of Int. Conference on Acoustics, Speech and Signal Processing, April 1993.
R. Chellappa, S. Chatterjee, "Classification of textures Using Gaussian Markov Random
Fields," IEEE Transactions on ASSP , Vol. 33, No.4, pp. 959-963, August 1985.
H. Derin, H. Elliot, "Modeling and Segmentation of Noisy and Textured Images Using
Gibbs Random Fields," IEEE Transactions on PAMI, Vol. 9, No.1 pp. 39-55, January
1987.
H. Derin, P. A. Kelly, 'Discrete-Index Markov-Type Random Processes,' in IEEE
Proceedings, vol 77, #10, pp.1485-1510, 1989
G.D. Forney, "The Viterbi algorithm," Proc. IEEE. Mar. 1973.
F. Jelinek, "Continuous Speech Recognition by Statistical Methods," Proceedings of
IEEE, vol. 64, pp. 532-556, April 1976.
M. Kearns, E. Levin, Unpublished, 1992.
C.-H. Lee, L. R. Rabiner, R. Pieraccini, J. G. Wilpon, "Acoustic Modeling for Large
Vocabulary Speech Recognition," Computer Speech and Language, 1990, No.4, pp.
127-165.
E. Levin, R. Pieraccini, "Dynamic Planar Warping and Planar Hidden Markov Modeling:
from Speech to Optical Character Recognition," submitted to IEEE Trans. on PAMI,
1992.
R. Pieraccini, E. Levin, "Stochastic Representation of Semantic Structure for Speech
Understanding," Proceedings of EUROSPEECH 91, Vo1.2, pp. 383-386, Genova,
September 1991.
L.R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in
Speech Recognition," Proc. IEEE, Feb. 1989.
J. G. Wilpon, L. R. Rabiner, C.-H. Lee, E. R. Goldman, "Automatic Recognition of
Keywords in Unconstrained Speech Using Hidden Markov Models," IEEE Trans. on
ASSP, Vol. 38, No. 11, pp 1870-1878, November 1990.
5,892 | 6,330 | Optimal Architectures in a Solvable Model of Deep
Networks
Jonathan Kadmon
The Racah Institute of Physics and ELSC
The Hebrew University, Israel
[email protected]
Haim Sompolinsky
The Racah Institute of Physics and ELSC
The Hebrew University, Israel
and
Center for Brain Science
Harvard University
Abstract
Deep neural networks have received a considerable attention due to the success
of their training for real world machine learning applications. They are also
of great interest to the understanding of sensory processing in cortical sensory
hierarchies. The purpose of this work is to advance our theoretical understanding of
the computational benefits of these architectures. Using a simple model of clustered
noisy inputs and a simple learning rule, we provide analytically derived recursion
relations describing the propagation of the signals along the deep network. By
analysis of these equations, and defining performance measures, we show that
these model networks have optimal depths. We further explore the dependence of
the optimal architecture on the system parameters.
1 Introduction
The use of deep feedforward neural networks in machine learning applications has become widespread
and has drawn considerable research attention in the past few years. Novel approaches for training
these structures to perform various computations are in constant development. However, there is still a
gap between our ability to produce and train deep structures to complete a task and our understanding
of the underlying computations. One interesting class of previously proposed models uses a series of
sequential de-noising autoencoders (dA) to construct a deep architecture [5, 14]. At its base, the
dA receives a noisy version of a pre-learned pattern and retrieves the noiseless representation. Other
methods of constructing deep networks by unsupervised methods have been proposed including
the use of Restricted Boltzmann Machines (RBMs) [3, 12, 7]. Deep architectures have been of
interest also to neuroscience as many biological sensory systems (e.g., vision, audition, olfaction and
somatosensation, see e.g. [9, 13]) are organized in hierarchies of multiple processing stages. Despite
the impressive recent success in training deep networks, fundamental understanding of the merits and
limitations of signal processing in such architectures is still lacking.
A theory of deep network entails two dynamical processes. One is the dynamics of weight matrices
during learning. This problem is challenging even for linear architectures and progress has been
made recently on this front (see e.g. [11]). The other dynamical process is the propagation of the
signal and the information it carries through the nonlinear feedforward stages. In this work we
focus on the second challenge, by analyzing the "signal and noise" neural dynamics in a solvable
model of deep networks. We assume a simple clustered structure of inputs where inputs take the
form of corrupted versions of a discrete set of cluster centers or "patterns". The goal of the multiple
processing layer is to reformat the inputs such that the noise is suppressed allowing for a linear
readout to perform classification tasks based on the top representations. We assume a simple learning
rule for the synaptic matrices, the well known Pseudo-Inverse rule [10]. The advantage of this choice,
beside its mathematics tractability, is the capacity for storing patterns. In particular, when the input
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
is noiseless, the propagating signals retain their desired representations with no distortion up to a
reasonable capacity limit. In addition, previous studies of this rule showed that these systems have a
considerable basins of attractions for pattern completion in a recurrent setting [8]. Here we study this
system in a deep feedforward architecture. Using mean field theory we derive recursion relations for
the propagation of signal and noise across the network layers, which are exact in the limit of large
network sizes. Analyzing this recursion dynamics, we show that for a fixed overall number of neurons,
there is an optimal depth that minimizes the readout average classification error. We analyze the
optimal depth as a function of the system parameters such as load, sparsity, and the overall system
size.
2 Model of Feedforward Processing of Clustered Inputs
We consider a network model of sensory processing composed of three or more layers of neurons
arranged in a feedforward architecture (figure 1). The first layer, composed of $N_0$ neurons, is the
input or stimulus layer. The input layer projects into a sequence of one or more intermediate layers,
which we also refer to as processing layers. These layers can represent neurons in sensory cortices or
cortical-like structures. The simplest case is a single processing layer (figure 1.A). More generally, we
consider L processing layers with possibly different widths (figure 1.B). The last layer in the model is
the readout layer, which represents a downstream neural population that receives input from the top
processing layer and performs a specific computation, such as recognition of a specific stimulus or
classification of stimuli. For concreteness, we will use a layer of one or more readout binary neurons
that perform binary classifications on the inputs. For simplicity, all neurons in the network are binary
units, i.e., the activity level of each neuron is either 0 (silent) or 1 (firing). We denote by $S_l^i \in \{0,1\}$ the
activity of the $i \in \{1, \ldots, N_l\}$ neuron in the $l \in \{1, \ldots, L\}$ layer; $N_l$ denotes the size of the layer.
The level of sparsity of the neural code, i.e. the fraction $f$ of active neurons for each stimulus, is set
by tuning the threshold $T_l$ of the neurons in each layer (see below). For simplicity we will assume all
neurons (except for the readout) have the same sparsity, $f$.
Figure 1: Schematics of the network. The network receives input from N0 neurons and then projects
them onto an intermediate layer composed of Nt processing neurons. The neurons can be arranged in
a single (A) or multiple (B) layers. The readout layer receives input from the last processing layer.
Input The input to the network is organized as clusters around $P$ activity patterns. At its center, each
cluster has a prototypical representation of an underlying specific stimulus, denoted as $\bar{S}^i_{0,\mu}$, where
$i = 1, \ldots, N_0$ denotes the index of the neuron in the input layer $l = 0$, and the index $\mu = 1, \ldots, P$
denotes the pattern number. The probability of an input neuron to be firing is denoted by $f_0$. Other
members of the clusters are noisy versions of the central pattern, representing natural variations in the
stimulus representation due to changes in physical features in the world, input noise, or neural noise.
We model the noise as an iid Bernoulli distribution. Each noisy input $S^i_{0,\mu}$ from the $\mu$-th cluster equals
$\bar{S}^i_{0,\mu}$ ($1 - \bar{S}^i_{0,\mu}$) with probability $(1 + m_0)/2$ ($(1 - m_0)/2$), respectively. Thus, the average overlap of
the noisy inputs with the central pattern, say $\mu = 1$, is
$$m_0 = \frac{1}{N_0 f(1-f)} \left\langle \sum_{i=1}^{N_0} \left(S^i_0 - f\right)\left(\bar{S}^i_{0,1} - f\right) \right\rangle, \qquad (1)$$
ranging from $m_0 = 1$ denoting the noiseless limit, to $m_0 = 0$ where the inputs are uncorrelated with
the centers. Topologically, the inputs are organized into clusters with radius $1 - m_0$.
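The corruption process is easy to simulate directly; a minimal sketch (ours, not from the paper):

    import numpy as np

    def noisy_input(S_bar, m0, rng=None):
        # Flip each bit of the central pattern S_bar with probability (1 - m0)/2,
        # which yields an expected overlap m0 in the sense of Eq. (1).
        if rng is None:
            rng = np.random.default_rng()
        flip = rng.random(S_bar.shape) > (1 + m0) / 2
        return np.where(flip, 1 - S_bar, S_bar)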
Update rule The state $S_l^i$ of the $i$-th neuron in the $l > 0$ layer is determined by thresholding the
weighted sum of the activities in the antecedent layer:
$$S_l^i = \theta\!\left(h_l^i - T_l\right). \qquad (2)$$
Here $\theta$ is the step function and the field $h_l^i$ represents the synaptic input to the neuron,
$$h_l^i = \sum_{j=1}^{N_{l-1}} W_{l,l-1}^{ij}\left(S_{l-1}^j - f\right), \qquad (3)$$
where the sparsity f is the mean activity level of the preceding layer (set by thresholding, Eq. (2)).
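For concreteness, Eqs. (2)-(3) amount to a single thresholded linear map per layer. A minimal NumPy sketch (ours; W is a weight matrix of the form (4) below, and T is assumed to have been tuned so that the mean activity equals f):

    import numpy as np

    def layer_update(S_prev, W, T, f):
        # Eq. (3): fields h_l^i from the centered activities of the previous layer
        h = W @ (S_prev - f)
        # Eq. (2): binary states by thresholding the fields
        return (h > T).astype(float)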
Synaptic matrix A key question is how the connectivity matrix $W_{l,l-1}^{ij}$ is chosen. Here we construct
the weight matrix by first allocating for each layer $l$ a set of $P$ random templates $\xi_{l,\mu} \in \{0,1\}^{N_l}$
(with mean activity $f$), which are to serve as the representations of the $P$ stimulus clusters in the layer.
Next, $W$ has to be trained to ensure that the response, $\bar{S}_{l,\mu}$, of the layer $l$ to a noiseless input, $\bar{S}_{0,\mu}$,
equals $\xi_{l,\mu}$. Here we use an explicit recipe to enforce these relations, namely the pseudo-inverse (PI)
model [10, 8, 6], given by
$$W_{l,l-1}^{ij} = \frac{1}{N_{l-1}\, f(1-f)} \sum_{\mu,\nu=1}^{P} \left(\xi_{l,\mu}^i - f\right)\left[\left(C^{l-1}\right)^{-1}\right]_{\mu\nu}\left(\xi_{l-1,\nu}^j - f\right), \qquad (4)$$
where
$$C_{\mu\nu}^{l} = \frac{1}{N_l\, f(1-f)} \sum_{i=1}^{N_l} \left(\xi_{l,\mu}^i - f\right)\left(\xi_{l,\nu}^i - f\right) \qquad (5)$$
is the correlation matrix of the random templates in the $l$th layer. For completeness we also denote
$\xi_{0,\mu} = \bar{S}_{0,\mu}$. This learning rule guarantees that for noiseless inputs, i.e., $S_0 = \xi_{0,\mu}$, the states of all
the layers are $S_{l,\mu} = \xi_{l,\mu}$. This will in turn allow for a perfect readout performance if noise is zero.
The capacity of this system is limited by the rank of $C^l$, so we require $P < N_l$ [8].
A similar model of clustered inputs fed into a single processing layer has been studied in [1] using a
simpler, Hebbian projection weights.
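A direct NumPy implementation of the pseudo-inverse rule (4)-(5) could look as follows (our sketch; xi_post and xi_pre hold the binary templates of two consecutive layers, one pattern per row):

    import numpy as np

    def pseudo_inverse_weights(xi_post, xi_pre, f):
        # xi_post: (P, N_post) templates of layer l; xi_pre: (P, N_pre) templates of layer l-1
        N_pre = xi_pre.shape[1]
        u_post = xi_post - f                     # centered templates
        u_pre = xi_pre - f
        norm = N_pre * f * (1 - f)
        C = u_pre @ u_pre.T / norm               # correlation matrix of Eq. (5)
        Cinv = np.linalg.inv(C)                  # requires P < N_pre (C full rank)
        return u_post.T @ Cinv @ u_pre / norm    # W of Eq. (4), shape (N_post, N_pre)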
3 Mean Field Equations for the Signal Propagation
To study the dynamics of the signal along the network layers, we assume that the input to the network
is a noisy version of one of the clusters, say, cluster $\mu = 1$. In the notation above, the input is a state
$\{S_0^i\}$ with an overlap $m_0$ with the pattern $\xi_{0,1}$. Information about the cluster identity of the input is
represented in subsequent layers through the overlap of the propagated state with the representation
of the same cluster in each layer; in our case, the overlap between the response of the layer $l$, $S_l$, and
$\xi_{l,1}$, defined similarly to Eq. (1), as:
$$m_l = \frac{1}{N_l\, f(1-f)} \left\langle \sum_{i=1}^{N_l} \left(S_l^i - f\right)\left(\xi_{l,1}^i - f\right) \right\rangle. \qquad (6)$$
In each layer the load is defined as
$$\alpha_l = \frac{P}{N_l}. \qquad (7)$$
Using analytical mean field techniques (detailed in the supplementary material), exact in the limit of
large $N$, we find a recursive equation for the overlaps of different layers. In this limit the fields and
the fluctuations of the fields $h_l^i$ assume Gaussian statistics as the realizations of the noisy input vary.
The overlaps are evaluated by thresholding these variables, given by
(for $l \ge 2$)
$$m_{l+1} = H\!\left[\frac{T_{l+1} - (1-f)\,m_l}{\sqrt{\Delta_{l+1} + Q_{l+1}}}\right] - H\!\left[\frac{T_{l+1} + f\,m_l}{\sqrt{\Delta_{l+1} + Q_{l+1}}}\right], \qquad (8)$$
where $H(x) = (2\pi)^{-1/2}\int_x^{\infty} dx\, \exp(-x^2/2)$. The threshold $T_l$ is set for each layer by solving
$$f = f\,H\!\left[\frac{T_{l+1} - (1-f)\,m_l}{\sqrt{\Delta_{l+1} + Q_{l+1}}}\right] + (1-f)\,H\!\left[\frac{T_{l+1} + f\,m_l}{\sqrt{\Delta_{l+1} + Q_{l+1}}}\right]. \qquad (9)$$
The factor $\Delta_{l+1} + Q_{l+1}$ is the variance of the fields $\left\langle \left(h_{l+1}^i\right)^2 \right\rangle$, which has two contributions. The
first is due to the variance in the noisy responses of the previous layers, yielding
$$\Delta_{l+1} = f(1-f)\,\frac{\alpha_l}{1-\alpha_l}\left(1 - m_l^2\right). \qquad (10)$$
The second contribution comes from the spatial correlations between noisy responses of the previous
layers, yielding
$$Q_{l+1} = \frac{1 - 2\alpha_l}{2\pi(1-\alpha_l)}\left(f \exp\!\left[-\frac{\left(T_l - (1-f)\,m_{l-1}\right)^2}{2(\Delta_l + Q_l)}\right] + (1-f)\exp\!\left[-\frac{\left(T_l + f\,m_{l-1}\right)^2}{2(\Delta_l + Q_l)}\right]\right)^{\!2}. \qquad (11)$$
Note that despite the fact that the noise in the different nodes of the input layer is uncorrelated, as the
signals propagate through the network, correlations between the noisy responses of different neurons
in the same layer emerge. These correlations depend on the particular realization of the random
templates, and will average to zero upon averaging over the templates. Nevertheless, they contribute
a non-random contribution to the total variance of the fields at each layer. Interestingly, for $\alpha_l > 1/2$
this term becomes negative, and reduces the overall variance of the fields.
The above recursion equations hold for $l \ge 2$. The initial conditions for this layer are $Q_1 = 0$ and $m_1$, $\Delta_1$, given by:
(Layer 1)
$$m_1 = H\!\left[\frac{T_1 - (1-f)\,m_0}{\sqrt{\Delta_1}}\right] - H\!\left[\frac{T_1 + f\,m_0}{\sqrt{\Delta_1}}\right], \qquad (12)$$
$$f = f\,H\!\left[\frac{T_1 - (1-f)\,m_0}{\sqrt{\Delta_1}}\right] + (1-f)\,H\!\left[\frac{T_1 + f\,m_0}{\sqrt{\Delta_1}}\right], \qquad (13)$$
and
$$\Delta_1 = f(1-f)\,\frac{\alpha_0}{1-\alpha_0}\left(1 - m_0^2\right), \qquad (14)$$
where $\alpha_0 = P/N_0$.
Finally, we note that a previous analysis of the feedforward PI model (in the dense case, $f = 0.5$)
reported in [6] neglected the contribution $Q_l$ of the induced correlations to the field variance.
Indeed, their approximate equations fail to correctly describe the behavior of the system. As we will
show, our recursion relations fully account for the behavior of the network in the limit of large $N$.
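The recursion (8)-(11) is straightforward to iterate numerically. The sketch below (ours, not from the paper) solves Eq. (9) for the threshold by root finding at each layer and then updates the overlap; H is the Gaussian tail function.

    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import brentq

    def H(x):
        return 0.5 * erfc(x / np.sqrt(2.0))   # Gaussian tail, H(x) = P(z > x) for z ~ N(0, 1)

    def iterate_overlap(m0, alpha, f, L):
        # Iterate Eqs. (8)-(11) for L equal layers of load alpha; returns the overlaps m_1..m_L.
        m, Q, ms = m0, 0.0, []
        D = f * (1 - f) * alpha / (1 - alpha) * (1 - m0 ** 2)        # Delta_1, Eq. (14)
        for _ in range(L):
            s = np.sqrt(D + Q)
            act = lambda T: f * H((T - (1 - f) * m) / s) + (1 - f) * H((T + f * m) / s) - f
            T = brentq(act, -20.0, 20.0)                             # threshold, Eq. (9)
            m_new = H((T - (1 - f) * m) / s) - H((T + f * m) / s)    # overlap update, Eq. (8)
            Q = (1 - 2 * alpha) / (2 * np.pi * (1 - alpha)) * (
                f * np.exp(-(T - (1 - f) * m) ** 2 / (2 * (D + Q)))
                + (1 - f) * np.exp(-(T + f * m) ** 2 / (2 * (D + Q)))) ** 2   # Eq. (11)
            D = f * (1 - f) * alpha / (1 - alpha) * (1 - m_new ** 2)          # Eq. (10)
            m = m_new
            ms.append(m)
        return ms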
Infinitely deep homogeneous network The above equations, Eqs. (8)-(11), describe the dynamics
of the average overlap of the network states and the variance in the inputs to the neurons in each
layer. This dynamics depends on the sizes (and sparsity) of the different processing layers. Although
the above equations are general, from now on we will assume a homogeneous architecture in which
$N_l = N = N_t/L$ (all with the same sparsity). To find the behavior of the signals as they propagate
along this infinitely deep homogeneous network ($l \to \infty$) we look for the fixed points of the recursion
equation.
Solution of the equations reveals three fixed points of the trajectories. Two of them are stable fixed
points, one at m = 0 and the other at m = 1. The third is an unstable fixed point at some intermediate
Figure 2: Overlap dynamics. (A) Trajectory of overlaps across layers from Eqs. (8)-(11) (solid lines)
and simulations (circles). The dashed red line shows the predicted separatrix $m^*$. The deviations from the
theoretical prediction near the separatrix are due to finite-size effects of the simulations ($\alpha = 0.4$,
$f = 0.1$). (B) Basin of attraction for two values of $f$ as a function of $\alpha$. Lines show the theoretical
prediction and shaded areas the simulations. (C) Convergence time (number of layers) of the $m = 1$
attractor. Near the unstable fixed point (dashed vertical lines) the convergence time diverges and rapidly
decreases for larger initial conditions, $m_0 > m^*$.
value $m^*$. Initial conditions with overlaps obeying $m_0 > m^*$ converge to 1, implying complete
suppression of the input noise, while those with $m_0 < m^*$ lose all overlap with the central pattern
[figure 2.A], which depicts the values of the overlaps for different initial conditions. As expected, the
curves (analytical results derived by numerically iterating the above mean field equations) terminate
either at $m_l = 1$ or $m_l = 0$ for large $l$. The same holds for the numerical simulations (dots) except
for a few intermediate values of initial conditions that converge to an intermediate asymptotic values
of overlaps. These intermediate fixed points are ?finite size effects?. As the system size (Nt and
correspondingly N ) increases, the range of initial conditions that converge to intermediate fixed
points shrinks to zero. In general increasing the sparsity of the representations (i.e., reducing f
) improves the performance of the network. As seen in [figure 2.B] the basin of attraction of the
noiseless fixed point increases as f decreases.
Convergence time In general, the overlaps approach the noiseless state relatively fast, i.e., within
5-10 layers. This holds for initial conditions well within the basin of attraction of this fixed point.
If the initial condition is close to the boundary of the basin, i.e., $m_0 \approx m^*$, convergence is slow. In
this case, the convergence time diverges as $m_0 \to m^*$ from above [figure 2.C].
4 Optimal Architecture
We evaluate the performances of the network by the ability of readout neurons to correctly perform
randomly chosen binary linear classifications of the clusters. For concreteness we consider the
performance of a single readout neuron to perform a binary classification where for each central
pattern the desired label is $\xi_{ro,\mu} = 0, 1$. The readout weights, projecting from the last processing
layer into the readout [figure 1], are assumed to be learned to perform the correct classification by
a pseudo-inverse rule, similar to the design of the processing weight matrices. The readout weight
matrix is given by
$$W_{ro}^{j} = \frac{1}{N\, f_{ro}(1-f_{ro})} \sum_{\mu,\nu=1}^{P} \left(\xi_{ro,\mu} - f_{ro}\right)\left[\left(C^{L}\right)^{-1}\right]_{\mu\nu}\left(\xi_{L,\nu}^j - f\right). \qquad (15)$$
We assume the readout labels are iid Bernoulli variables with zero bias ($f_{ro} = 0.5$), though a bias can
be easily incorporated. The error of the readout is the probability of the neuron being in the opposite
state than the labels,
$$\epsilon = \frac{1 - m_{ro}}{2}, \qquad (16)$$
where $m_{ro}$ is the average overlap of the readout layer, and can be calculated using the recursion
equations (8)-(11). However, since generally $f \neq f_{ro}$, the activity factor needs to be replaced in the
proper positions in the equations. For correctness, we provide the exact form of the readout equation in
the supplementary material.
4.1 Single infinite layer
In the following we explore the utility of deep architectures in performing the above tasks. Before
assessing quantitatively different architectures, we present a simple comparison between a single
infinitely wide layer and a deep network with a small number of finite-width layers.
An important result of our theory is that for a model with a single processing layer with finite $f$, the
overlap $m_1$ and hence the classification error do not vanish even for a layer with an infinite number of
neurons. This holds for all levels of input noise, i.e., as long as $m_0 < 1$. This can be seen by setting
$\alpha = 0$ in equations (8)-(11) for $L = 2$. Note that although the variance contribution to the noise in
the field, $\Delta_{ro}$, vanishes, the contribution from the correlations, $Q_1$, remains finite and is responsible
for the fact that $m_{ro} < 1$ and $\epsilon > 0$ [1]. In contrast, in a deep network, if the initial overlap is within
the basin of attraction of the $m = 1$ solution, the overlap quickly approaches $m = 1$ [figure 2.C]. This
suggests that a deep architecture will generally perform better than a single layer, as can be seen in
the example in figure 3.A.
Mean error The readout error depends on the level of the initial noise (i.e., the value of m0 ). Here
we introduce a global measure of performance, E , defined as the readout error averaged over the
initial overlaps,
$$E = \int_0^1 dm_0\, \rho(m_0)\,\epsilon(m_0), \qquad (17)$$
where $\rho(m_0)$ is the distribution of cluster sizes. For simplicity we use here a uniform distribution
$\rho = 1$. The mean error is a function of the parameters of the network, namely the sparsity $f$, the input
and total loads $\alpha_0 = P/N_0$, $\alpha_t = P/N_t$ respectively, and the number of layers $L$, which describes
the layout of the network. We are now ready to compare the performance of different architectures.
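Numerically, Eq. (17) is a one-dimensional quadrature over the recursion output. A minimal sketch (ours, reusing the hypothetical iterate_overlap above and approximating the readout error by the top-layer overlap, epsilon = (1 - m_L)/2):

    import numpy as np

    def mean_error(alpha, f, L, n_grid=200):
        # Mean error E of Eq. (17) for a uniform cluster-size distribution rho = 1.
        m0s = np.linspace(0.0, 0.999, n_grid)   # avoid the singular noiseless point m0 = 1
        eps = [0.5 * (1.0 - iterate_overlap(m0, alpha, f, L)[-1]) for m0 in m0s]
        return float(np.mean(eps))              # uniform-grid average, approximating the integral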
4.2 Limited resources
In any real setting, the resources of the network are limited. This may be due to a finite number of
available neurons or a limit on the computational power. To evaluate the optimal architecture under
constraints of a fixed total number of neurons, we assume that the total number of neurons is fixed
to $N_t = \beta N_0$, where $N_0$ is the size of the input layer. As in the analysis above, we consider for
simplicity alternative uniform architectures in which all processing layers are of equal size $N = N_t/L$.
The performance as a function of the number of layers is shown in figure 3.B, which depicts the
mean error against the number of processing layers $L$ for several values of the expansion factor $\beta$.
These curves show that the error has a minimum at a finite depth
$$L_{opt} = \arg\min_L E(L). \qquad (18)$$
The reason for this is that for shallower networks, the overlaps have not been iterated a sufficient
number of times and hence remain further from the noiseless fixed point. On the other hand, deeper
networks will have an increased load at each layer, since
$$\alpha = \frac{P}{\beta N_0}\, L, \qquad (19)$$
thereby reducing the noise suppression of each layer. As seen in the figure, increasing the total
number of neurons yields a lower mean error $E_{opt}$, and increases the optimal depth of the
network. Note however, that for large $\beta$, the mean error rises slowly for $L$ larger than its optimal
value; this is because the error changes very slowly with $\alpha$ for small $\alpha$, and remains close to its
$\alpha = 0$ value. Thus, increasing the depth moderately above $L_{opt}$ may not harm the
performance significantly. Ultimately, if $L$ increases to the order of $\beta N_0/P$, the load in each processing layer
$\alpha$ approaches 1, and the performance deteriorates drastically. Other considerations, such as the time
required for computation, may favor shallower architectures, and in practice will limit the utility of
architectures deeper than $L_{opt}$.
Figure 3: Optimal layout. (A) Comparing the readout error produced by the same initial condition
($m_0 = 0.6$) of a single, infinitely-wide processing layer to that of a deep architecture with $\beta = 0.2$.
For both networks $\alpha_0 = 0.7$, $f = 0.15$ and $m_0 = 0.6$. (B) Mean error as a function of the number
of processing layers for three values of the expansion factor $\beta = N_t/N_0$. The dashed line shows the
error of a single infinite layer. (C) Optimal number of layers as a function of the inverse of the input
load ($\alpha_0^{-1} \propto 1/P$), for different values of sparsity. Lines show a linear regression on the data points. (D)
Minimal error as a function of the input load (number of stored templates). Same color code as (C).
The effect of load on the optimal architecture If the overall number of neurons in the network is
fixed, then the optimal layout $L_{opt}$ is a function of the size of the dataset, i.e., $P$. For large $P$, the
optimal network becomes shallow. This is because when the load is high, resources are better
allocated to constrain $\alpha$ as much as possible, due to the high readout error when $\alpha$ is close to 1,
figures 3.C and 3.D. As shown in [figure 3.D], $L_{opt}$ increases with decreasing load, scaling as
$$L_{opt} \propto P^{-1/2}. \qquad (20)$$
This implies that the width $N_{opt}$ scales as
$$N_{opt} \propto P^{1/2}. \qquad (21)$$
4.3 Autoencoder example
The model above assumes inputs in the form of random patterns ($\xi_{0,\mu}$) corrupted by noise. Here
we illustrate the qualitative behavior of the network for inputs generated by handwritten digits
(MNIST dataset) with random corruptions. To visualize the suppression of noise by the deep pseudo-inverse network, we train the network with an autoencoder readout layer, namely we use a readout layer of
size $N_0$ and readout labels equal to the original noiseless images, $\xi_{ro,\mu} = \xi_{0,\mu}$. The readout weights
are pseudo-inverse weights with output labels identical to the input patterns, following Eq. (15) [2].
A perfect overlap at the readout layer implies perfect reconstruction of the original noiseless
pattern.
In figure 4, two networks were trained as autoencoders on a set of templates composed of 3-digit
numbers (See experimental procedures in the supplementary material). Both networks have the same
number of neurons. In the first, all processing neurons are placed in a single wide layer, while in the
other neurons were divided into 10 equally-sized layers. As the theory predicts, the deep structure
is able to reproduce the original templates for a wide range of initial noise, while the single layer
typically reduces the noise but fails to reproduce the original image.
7
Figure 4: Visual example of the difference between a single processing layer and a deep structure. Input data was prepared using the MNIST handwritten digit database. Example of the templates
are shown on the top row. Two different networks were trained to autoencode the inputs, one with
all the processing neurons in a single layer (figure 1.A) and one in which the neurons were divided
equally between 10 layers (figure 1.B) (See experimental procedures in the supplementary material
for details). A noisy version of the templates were introduced to the two networks and the outputs are
presented on the third and fourth rows, for different level of initial noise (columns).
5 Summary and Final Remarks
Our paper aims at gaining a better understanding of the functionality of deep networks. Whereas the
operation of the bottom (low level processing of the signals) and the top (fully supervised) stages are
well understood, an understanding of the rationale of multiple intermediate stages and the tradeoffs
between competing architectures is lacking. The model we study is simplified both in the task,
suppressing noise, and its learning rule (pseudo-inverse). With respect to the first, we believe that
changing the noise model to the more realistic variability inherent in objects will exhibit the same
qualitative behaviors. With respect to the learning rule, the pseudo-inverse is close to the SVM rule in the
regime in which we work, so we believe it is a good tradeoff between realism and tractability. Thus, despite
the unavoidable simplicity of our model, we believe its analysis yields important insights which will
likely carry over to the more realistic domains of deep networks studied in ML and neuroscience.
Effects of sparseness Our results show that the performance of the network is improved as the
sparsity of the representation increases. In the extreme case of $f \to 0$, perfect suppression of noise
occurs already after a single processing layer. Cortical sensory representations exhibit only moderate
sparsity levels, $f \approx 0.1$. Computational considerations of robustness to "representational noise"
at each layer will also limit the value of $f$. Thus, deep architectures may be necessary for good
performance at realistic moderate levels of sparsity (or for dense representations).
Infinitely wide shallow architectures: A central result of our model is that a finite deep network
may perform better than a network with a single processing layer of infinite width. An infinitely wide
shallow network has been studied in the past (e.g., [4]). In principle, an infinitely wide network, even
with random projection weights, may serve as a universal approximator, allowing for readout
performance as good as or superior to any finite deep network. This however requires a complex
training of the readout weights. Our relatively simple readout weights are incapable of extracting this
information from the infinite, shallow architecture. Similar behavior is seen with simpler readout
weights, the Hebbian weights, as well as with more complex readouts generated by training the readout
weights using SVMs with noiseless patterns or noisy inputs [1]. Thus, our results hold qualitatively
for a broad range of plausible readout learning algorithms (such as Hebb, PI, SVM) but not for an
arbitrarily complex search that finds the optimal readout weights.
Acknowledgements
This work was partially supported by IARPA (contract #D16PC00002), Gatsby Charitable Foundation,
and Simons Foundation SCGB grant.
References
[1] Baktash Babadi and Haim Sompolinsky. Sparseness and Expansion in Sensory Representations. Neuron, 83(5):1213-1226, September 2014.
[2] Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. 2(1):53-58, 1989.
[3] Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Saxe, and Andrew Y. Ng. Unsupervised learning models of primary cortical receptive fields and receptive field plasticity. Advances in Neural . . . , pages 1971-1979, 2011.
[4] Y. Cho and L. K. Saul. Large-margin classification in infinite neural networks. Neural Computation, 22(10):2678-2697, 2010.
[5] William W. Cohen, Andrew McCallum, and Sam T. Roweis, editors. Extracting and Composing Robust Features with Denoising Autoencoders. ACM, 2008.
[6] E. Domany, W. Kinzel, and R. Meir. Layered neural networks. Journal of Physics A: Mathematical and General, 22(12):2081-2102, June 1989.
[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786):504-507, July 2006.
[8] I. Kanter and Haim Sompolinsky. Associative recall of memory without errors. Physical Review A, 35(1):380-392, 1987.
[9] Honglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area V2. Advances in Neural Information . . . , pages 873-880, 2008.
[10] L. Personnaz, I. Guyon, and G. Dreyfus. Information storage and retrieval in spin-glass like neural networks. Journal de Physique Lettres, 46(8):359-365, April 1985.
[11] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv.org, December 2013.
[12] Paul Smolensky. Information Processing in Dynamical Systems: Foundations of Harmony Theory. February 1986.
[13] Glenn C. Turner, Maxim Bazhenov, and Gilles Laurent. Olfactory Representations by Drosophila Mushroom Body Neurons. Journal of Neurophysiology, 99(2):734-746, February 2008.
[14] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. The Journal of Machine Learning Research, 11:3371-3408, March 2010.
5,893 | 6,331 | Robustness of classifiers:
from adversarial to random noise
Alhussein Fawzi*, Seyed-Mohsen Moosavi-Dezfooli*, Pascal Frossard
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
{alhussein.fawzi, seyed.moosavi, pascal.frossard} at epfl.ch
Abstract
Several recent works have shown that state-of-the-art classifiers are vulnerable to
worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand,
it has been empirically observed that these same classifiers are relatively robust
to random noise. In this paper, we propose to study a semi-random noise regime
that generalizes both the random and worst-case noise regimes. We propose
the first quantitative analysis of the robustness of nonlinear classifiers in this
general noise regime. We establish precise theoretical bounds on the robustness of
classifiers in this general regime, which depend on the curvature of the classifier's
decision boundary. Our bounds confirm and quantify the empirical observations that
classifiers satisfying curvature constraints are robust to random noise. Moreover,
we quantify the robustness of classifiers in terms of the subspace dimension in
the semi-random noise regime, and show that our bounds remarkably interpolate
between the worst-case and random noise regimes. We perform experiments and
show that the derived bounds provide very accurate estimates when applied to
various state-of-the-art deep neural networks and datasets. This result suggests
bounds on the curvature of the classifiers? decision boundaries that we support
experimentally, and more generally offers important insights onto the geometry of
high dimensional classification problems.
1
Introduction
State-of-the-art classifiers, especially deep networks, have shown impressive classification performance on many challenging benchmarks in visual tasks [9] and speech processing [7]. An equally
important property of a classifier that is often overlooked is its robustness in noisy regimes, when
data samples are perturbed by noise. The robustness of a classifier is especially fundamental when
it is deployed in real-world, uncontrolled, and possibly hostile environments. In these cases, it
is crucial that classifiers exhibit good robustness properties. In other words, a sufficiently small
perturbation of a datapoint should ideally not result in altering the estimated label of a classifier.
State-of-the-art deep neural networks have recently been shown to be very unstable to worst-case
perturbations of the data (or equivalently, adversarial perturbations) [17]. In particular, despite
the excellent classification performances of these classifiers, well-sought perturbations of the data
can easily cause misclassification, since data points often lie very close to the decision boundary
of the classifier. Despite the importance of this result, the worst-case noise regime that is studied
in [17] only represents a very specific type of noise. It furthermore requires the full knowledge of the
classification model, which may be a hard assumption in practice.
In this paper, we precisely quantify the robustness of nonlinear classifiers in two practical noise
regimes, namely random and semi-random noise regimes. In the random noise regime, datapoints are
*The first two authors contributed equally to this work.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
perturbed by noise with random direction in the input space. The semi-random regime generalizes this
model to random subspaces of arbitrary dimension, where a worst-case perturbation is sought within
the subspace. In both cases, we derive bounds that precisely describe the robustness of classifiers in
function of the curvature of the decision boundary. We summarize our contributions as follows:
• In the random regime, we show that the robustness of classifiers behaves as √d times the distance from the datapoint to the classification boundary (where d denotes the dimension of the data), provided the curvature of the decision boundary is sufficiently small. This result highlights the blessing of dimensionality for classification tasks, as it implies that robustness to random noise in high dimensional classification problems can be achieved, even at datapoints that are very close to the decision boundary.
• This quantification notably extends to the general semi-random regime, where we show that the robustness precisely behaves as √(d/m) times the distance to the boundary, with m the dimension of the subspace. This result shows in particular that, even when m is chosen as a small fraction of the dimension d, it is still possible to find small perturbations that cause data misclassification.
• We empirically show that our theoretical estimates are very accurately satisfied by state-of-the-art deep neural networks on various sets of data. This in turn suggests quantitative insights on the curvature of the decision boundary that we support experimentally through the visualization and estimation on two-dimensional sections of the boundary.
The robustness of classifiers to noise has been the subject of intense research. The robustness properties of SVM classifiers have been studied in [19] for example, and robust optimization approaches for
constructing robust classifiers have been proposed to minimize the worst possible empirical error
under noise disturbance [1, 10]. More recently, following the recent results on the instability of
deep neural networks to worst-case perturbations [17], several works have provided explanations of
the phenomenon [3, 5, 14, 18], and designed more robust networks [6, 8, 20, 13, 15, 12]. In [18],
the authors provide an interesting empirical analysis of the adversarial instability, and show that
adversarial examples are not isolated points, but rather occupy dense regions of the pixel space. In
[4], state-of-the-art classifiers are shown to be vulnerable to geometrically constrained adversarial
examples. Our work differs from these works, as we provide a theoretical study of the robustness of
classifiers to random and semi-random noise in terms of the robustness to adversarial noise. In [3], a
formal relation between the robustness to random noise, and the worst-case robustness is established
in the case of linear classifiers. Our result therefore generalizes [3] in many aspects, as we study
general nonlinear classifiers, and robustness to semi-random noise. Finally, it should be noted that
the authors in [5] conjecture that the "high linearity" of classification models explains their instability
to adversarial perturbations. The objective and approach we follow here is however different, as we
study theoretical relations between the robustness to random, semi-random and adversarial noise.
2
Definitions and notations
Let f : R^d → R^L be an L-class classifier. Given a datapoint x0 ∈ R^d, the estimated label is obtained by k̂(x0) = argmax_k f_k(x0), where f_k(x) is the k-th component of f(x) that corresponds to the k-th class. Let S be an arbitrary subspace of R^d of dimension m. Here, we are interested in quantifying the robustness of f with respect to different noise regimes. To do so, we define r*_S to be the perturbation in S of minimal norm that is required to change the estimated label of f at x0:²

    r*_S(x0) = argmin_{r ∈ S} ||r||_2  s.t.  k̂(x0 + r) ≠ k̂(x0).    (1)

Note that r*_S(x0) can be equivalently written

    r*_S(x0) = argmin_{r ∈ S} ||r||_2  s.t.  ∃ k ≠ k̂(x0) : f_k(x0 + r) ≥ f_{k̂(x0)}(x0 + r).    (2)

²Perturbation vectors sending a datapoint exactly to the boundary are assumed to change the estimated label of the classifier.

When S = R^d, r*(x0) := r*_{R^d}(x0) is the adversarial (or worst-case) perturbation defined in [17], which corresponds to the (unconstrained) perturbation of minimal norm that changes the label of the datapoint x0. In other words, ||r*(x0)||_2 corresponds to the minimal distance from x0 to the classifier boundary. In the case where S ⊊ R^d, only perturbations along S are allowed. The robustness of f at x0 along S is naturally measured by the norm ||r*_S(x0)||_2. Different choices for S permit to study the robustness of f in two different regimes:
• Random noise regime: This corresponds to the case where S is a one-dimensional subspace (m = 1) with direction v, where v is a random vector sampled uniformly from the unit sphere S^{d−1}. Writing it explicitly, we study in this regime the robustness quantity defined by min_t |t| s.t. ∃ k ≠ k̂(x0) : f_k(x0 + tv) ≥ f_{k̂(x0)}(x0 + tv), where v is a vector sampled uniformly at random from the unit sphere S^{d−1}.
• Semi-random noise regime: In this case, the subspace S is chosen randomly, but can be of arbitrary dimension m.³ We use the semi-random terminology as the subspace is chosen randomly, and the smallest vector that causes misclassification is then sought in the subspace. It should be noted that the random noise regime is a special case of the semi-random regime with a subspace of dimension m = 1. We differentiate nevertheless between these two regimes for clarity.
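In code, one convenient way to draw such a random subspace (an equivalent construction, consistent with the definition in footnote 3: Gaussian directions are uniform on the sphere, and a QR step returns an orthonormal basis of their span) is the following sketch, assuming numpy:

```python
import numpy as np

def sample_random_subspace(d, m, rng):
    """Orthonormal basis (d, m) of a random m-dimensional subspace
    of R^d, spanned by m iid uniformly-distributed directions."""
    G = rng.normal(size=(d, m))   # Gaussian columns: directions uniform on S^{d-1}
    Q, _ = np.linalg.qr(G)        # orthonormalize; span is unchanged
    return Q

V = sample_random_subspace(d=1000, m=50, rng=np.random.default_rng(1))
print(np.allclose(V.T @ V, np.eye(50)))  # True: orthonormal basis
```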
In the remainder of the paper, the goal is to establish relations between the robustness in the random and semi-random regimes on the one hand, and the robustness to adversarial perturbations ||r*(x0)||_2 on the other hand. We recall that the latter quantity captures the distance from x0 to the classifier boundary, and is therefore a key quantity in the analysis of robustness.

In the following analysis, we fix x0 to be a datapoint classified as k̂(x0). To simplify the notation, we remove the explicit dependence on x0 in our notations (e.g., we use r*_S instead of r*_S(x0) and k̂ instead of k̂(x0)), and it should be implicitly understood that all our quantities pertain to the fixed datapoint x0.
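To make these definitions concrete, consider the special case of a binary affine classifier f(x) = w·x + b, for which r*_S has a closed form: writing r = Vs with V an orthonormal basis of S, the boundary constraint w·(x0 + r) = 0 gives r*_S = −f(x0) V Vᵀw / ||Vᵀw||₂². The sketch below (assuming numpy; the function name and toy setup are ours, not from the paper) illustrates this; general networks instead require an iterative procedure such as [13].

```python
import numpy as np

def min_perturbation_affine(x0, w, b, V):
    """Minimal-norm perturbation in span(V) sending x0 onto the
    boundary of the binary affine classifier f(x) = w @ x + b.
    V: (d, m) matrix with orthonormal columns. Returns None if the
    boundary is unreachable within the subspace."""
    f0 = w @ x0 + b               # value to cancel out
    w_S = V.T @ w                 # subspace coordinates of w
    if np.allclose(w_S, 0.0):
        return None               # w orthogonal to S: label cannot change
    s = -f0 * w_S / (w_S @ w_S)   # least-norm s solving w @ (x0 + V s) = 0
    return V @ s

# toy usage with S spanned by the first m canonical directions
d, m = 100, 5
rng = np.random.default_rng(0)
w, b, x0 = rng.normal(size=d), 0.1, rng.normal(size=d)
V = np.eye(d)[:, :m]
r = min_perturbation_affine(x0, w, b, V)
print(abs(w @ (x0 + r) + b) < 1e-9)  # True: x0 + r lies on the boundary
```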
3
Robustness of affine classifiers
We first assume that f is an affine classifier, i.e., f(x) = Wᵀx + b for a given W = [w_1 ... w_L] and b ∈ R^L.

The following result shows a precise relation between the robustness to semi-random noise, ||r*_S||_2, and the robustness to adversarial perturbations, ||r*||_2.
Theorem 1. Let δ > 0, S be a random m-dimensional subspace of R^d, and f be an L-class affine classifier. Let

    ζ1(m, δ) = (1 + 2 √(ln(1/δ)/m) + 2 ln(1/δ)/m)^{−1},    (3)

    ζ2(m, δ) = (max{ (1/e) δ^{2/m}, 1 − √(2 (1 − δ^{2/m})) })^{−1}.    (4)

The following inequalities hold between the robustness to semi-random noise ||r*_S||_2 and the robustness to adversarial perturbations ||r*||_2:

    √(ζ1(m, δ)) √(d/m) ||r*||_2 ≤ ||r*_S||_2 ≤ √(ζ2(m, δ)) √(d/m) ||r*||_2,    (5)

with probability exceeding 1 − 2(L + 1)δ.
The proof can be found in the appendix. Our upper and lower bounds depend on the functions ζ1(m, δ) and ζ2(m, δ) that control the inequality constants (for m, δ fixed). It should be noted that ζ1(m, δ) and ζ2(m, δ) are independent of the data dimension d. Fig. 1 shows the plots of ζ1(m, δ) and ζ2(m, δ) as functions of m, for a fixed δ. It should be noted that for sufficiently large m, ζ1(m, δ) and ζ2(m, δ) are very close to 1 (e.g., ζ1(m, δ) and ζ2(m, δ) belong to the interval [0.8, 1.3] for m ≥ 250 in the settings of Fig. 1). The interval [ζ1(m, δ), ζ2(m, δ)] is however (unavoidably) larger when m = 1.
³A random subspace is defined as the span of m independent vectors drawn uniformly at random from S^{d−1}.
The result in Theorem 1 shows that in the random and semi-random noise regimes, the robustness to noise is precisely related to ||r*||_2 by a factor of √(d/m). Specifically, in the random noise regime (m = 1), the magnitude of the noise required to misclassify the datapoint behaves as Θ(√d ||r*||_2) with high probability, with constants in the interval [ζ1(1, δ), ζ2(1, δ)]. Our results therefore show that, in high dimensional classification settings, affine classifiers can be robust to random noise, even if the datapoint lies very close to the decision boundary (i.e., ||r*||_2 is small). In the semi-random noise regime with m sufficiently large (e.g., m ≥ 250), we have ||r*_S||_2 ≈ √(d/m) ||r*||_2 with high probability, as the constants ζ1(m, δ) ≈ ζ2(m, δ) ≈ 1 for sufficiently large m.

Figure 1: ζ1(m, δ) and ζ2(m, δ) in function of m [δ = 0.05].
Our bounds therefore "interpolate" between the random noise regime, which behaves as √d ||r*||_2, and the worst-case noise ||r*||_2. More importantly, the square root dependence is also notable here, as it shows that the semi-random robustness can remain small even in regimes where m is chosen to be a very small fraction of d. For example, choosing a small subspace of dimension m = 0.01d results in semi-random robustness of 10||r*||_2 with high probability, which might still not be perceptible in complex visual tasks. Hence, for semi-random noise that is mostly random and only mildly adversarial (i.e., the subspace dimension is small), affine classifiers remain vulnerable to such noise.
4
Robustness of general classifiers
4.1
Curvature of the decision boundary
We now consider the general case where f is a nonlinear classifier. We derive relations between the random and semi-random robustness ||r*_S||_2 and the worst-case robustness ||r*||_2 using properties of the classifier's boundary. Let i and j be two arbitrary classes; we define the pairwise boundary B_{i,j} as the boundary of the binary classifier where only classes i and j are considered. Formally, the decision boundary is given by B_{i,j} := {x ∈ R^d : f_i(x) − f_j(x) = 0}. The boundary B_{i,j} separates two regions of R^d, namely R_i and R_j, where the estimated label of the binary classifier is respectively i and j.

We assume for the purpose of this analysis that the boundary B_{i,j} is smooth. We are now interested in the geometric properties of the boundary, namely its curvature. Many notions of curvature can be defined on hypersurfaces [11]. In the simple case of a curve in a two-dimensional space, the curvature is defined as the inverse of the radius of the so-called osculating circle. One way to define curvature for high-dimensional hypersurfaces is by taking normal sections of the hypersurface, and measuring the curvature of the resulting planar curve (see Fig. 2). We however introduce a notion of curvature that is specifically suited to the analysis of the decision boundary of a classifier. Informally, our curvature captures the global bending of the decision boundary by inscribing balls in the regions separated by the decision boundary. For a given p ∈ B_{i,j}, we define q_{i‖j}(p) to be the radius of the largest open ball included in the region R_i that intersects with B_{i,j} at p; i.e.,

    q_{i‖j}(p) = sup_{z ∈ R^d} { ||z − p||_2 : B(z, ||z − p||_2) ⊆ R_i },    (6)

where B(z, ||z − p||_2) is the open ball in R^d of center z and radius ||z − p||_2. An illustration of this quantity in two dimensions is provided in Fig. 2 (b). It is not hard to see that any ball B(z*, ||z* − p||_2) centered at z* and included in R_i will have its tangent space at p coincide with the tangent of the decision boundary at the same point.

It should further be noted that the definition in Eq. (6) is not symmetric in i and j. We therefore define the following symmetric quantity q_{i,j}(p), where the worst-case ball inscribed in any of the two regions R_i and R_j is considered:

    q_{i,j}(p) = min( q_{i‖j}(p), q_{j‖i}(p) ).
Figure 2: (a) Normal section of the boundary B_{i,j} with respect to the plane U = span(n, u), where n is the normal to the boundary at p, and u is an arbitrary vector in the tangent space T_p(B_{i,j}). (b) Illustration of the quantities introduced for the definition of the curvature of the decision boundary.
To measure the global curvature, the worst-case radius is taken over all points on the decision boundary, i.e., q(B_{i,j}) = inf_{p ∈ B_{i,j}} q_{i,j}(p). The curvature κ(B_{i,j}) is then defined as the inverse of the worst-case radius: κ(B_{i,j}) = 1/q(B_{i,j}).

In the case of affine classifiers, we have κ(B_{i,j}) = 0, as it is possible to inscribe balls of infinite radius inside each region of the space. When the classification boundary is a union of (sufficiently distant) spheres with equal radius R, the curvature κ(B_{i,j}) = 1/R. In general, the quantity κ(B_{i,j}) provides an intuitive way of describing the nonlinearity of the decision boundary by fitting balls inside the classification regions.
4.2
Robustness to random and semi-random noise
We now establish bounds on the robustness to random and semi-random noise in the binary classification case. Let x0 be a datapoint classified as k̂ = k̂(x0). We first study the binary classification problem, where only classes k̂ and k ∈ {1, ..., L}\{k̂} are considered. To simplify the notation, we let B_k := B_{k,k̂} be the decision boundary between classes k and k̂. In the case of the binary classification problem where classes k and k̂ are considered, the semi-random perturbation defined in Eq. (2) can be re-written as follows:

    r_S^k = argmin_{r ∈ S} ||r||_2  s.t.  f_k(x0 + r) ≥ f_{k̂}(x0 + r).    (7)

The worst-case perturbation (obtained with S = R^d) is denoted by r^k. It should be noted that the global quantities r*_S and r* are obtained from r_S^k and r^k by taking the vectors with minimum norm over all classes k.

The following result gives upper and lower bounds on the ratio ||r_S^k||_2 / ||r^k||_2 in function of the curvature of the boundary separating class k and k̂.
Theorem 2. Let S be a random m-dimensional subspace of R^d. Let κ := κ(B_k). Assuming that the curvature satisfies

    κ ≤ (C / (ζ2(m, δ) ||r^k||_2)) · (m/d),    (8)

the following inequality holds between the semi-random robustness ||r_S^k||_2 and the adversarial robustness ||r^k||_2:

    (1 − C1 ||r^k||_2 κ ζ2 (d/m)) √(ζ1) √(d/m) ≤ ||r_S^k||_2 / ||r^k||_2 ≤ (1 + C2 ||r^k||_2 κ ζ2 (d/m)) √(ζ2) √(d/m),    (9)

with probability larger than 1 − 4δ. We recall that ζ1 = ζ1(m, δ) and ζ2 = ζ2(m, δ) are defined in Eq. (3, 4). The constants are C = 0.2, C1 = 0.625, C2 = 2.25.
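The two sides of Eq. (9) are straightforward to evaluate numerically; below is a sketch assuming numpy, with ζ1, ζ2 re-derived from Eq. (3, 4) (the function name and example values are ours):

```python
import numpy as np

C, C1, C2 = 0.2, 0.625, 2.25          # constants of Theorem 2

def zeta1(m, delta):
    t = np.log(1.0 / delta) / m
    return 1.0 / (1.0 + 2.0 * np.sqrt(t) + 2.0 * t)

def zeta2(m, delta):
    u = delta ** (2.0 / m)
    return 1.0 / max((1.0 / np.e) * u, 1.0 - np.sqrt(2.0 * (1.0 - u)))

def theorem2_bounds(d, m, delta, kappa, r_k):
    """Lower/upper factors of Eq. (9) on ||r_S^k||_2 / ||r^k||_2;
    only meaningful when the curvature condition (8) holds."""
    z1, z2 = zeta1(m, delta), zeta2(m, delta)
    assert kappa <= C * m / (z2 * r_k * d), "curvature condition (8) violated"
    scale = np.sqrt(d / m)
    lo = (1.0 - C1 * r_k * kappa * z2 * d / m) * np.sqrt(z1) * scale
    hi = (1.0 + C2 * r_k * kappa * z2 * d / m) * np.sqrt(z2) * scale
    return lo, hi

print(theorem2_bounds(d=10_000, m=500, delta=0.05, kappa=1e-6, r_k=1.0))
```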
The proof can be found in the appendix. This result shows that the bounds relating the robustness to random and semi-random noise to the worst-case robustness can be extended to nonlinear classifiers, provided the curvature of the boundary κ(B_k) is sufficiently small. In the case of linear classifiers, we have κ(B_k) = 0, and we recover the result for affine classifiers from Theorem 1.
To extend this result to multi-class classification, special care has to be taken. In particular, if k denotes a class that has no boundary with class k̂, ||r^k||_2 can be very large and the previous curvature condition is not satisfied. It is therefore crucial to exclude such classes that have no boundary in common with class k̂, or more generally, boundaries that are far from class k̂. We define the set A of excluded classes k where ||r^k||_2 is large:

    A = { k : ||r^k||_2 ≥ 1.45 √(ζ2(m, δ)) √(d/m) ||r*||_2 }.    (10)

Note that A is independent of S, and depends only on d, m and δ. Moreover, the constants in (10) were chosen for simplicity of exposition.
Assuming a curvature constraint only on the close enough classes, the following result establishes a simplified relation between ||r*_S||_2 and ||r*||_2.

Corollary 1. Let S be a random m-dimensional subspace of R^d. Assume that, for all k ∉ A, the curvature condition in Eq. (8) holds. Then, we have

    0.875 √(ζ1(m, δ)) √(d/m) ||r*||_2 ≤ ||r*_S||_2 ≤ 1.45 √(ζ2(m, δ)) √(d/m) ||r*||_2,    (11)

with probability larger than 1 − 4(L + 2)δ.
Under the curvature condition in (8) on the boundaries between k̂ and classes in A^c, our result shows that the robustness to random and semi-random noise exhibits the same behavior that has been observed earlier for linear classifiers in Theorem 1. In particular, ||r*_S||_2 is precisely related to the adversarial robustness ||r*||_2 by a factor of √(d/m). In the random regime (m = 1), this factor becomes √d, and shows that in high dimensional classification problems, classifiers with sufficiently flat boundaries are much more robust to random noise than to adversarial noise. However, in the semi-random regime, the factor is √(d/m) and shows that robustness to semi-random noise might not be achieved even if m is chosen to be a tiny fraction of d. In other words, if a classifier is highly vulnerable to adversarial perturbations, then it is also vulnerable to noise that is overwhelmingly random and only mildly adversarial.

It is important to note that the curvature condition in Corollary 1 is not an assumption on the curvature of the global decision boundary, but rather an assumption on the decision boundaries between pairs of classes. The distinction here is significant, as junction points where two decision boundaries meet might actually have a very large (or infinite) curvature (even in linear classification settings), and the curvature condition in Corollary 1 typically does not hold for this global curvature definition. We refer to our experimental section for a visualization of this phenomenon.
5
Experiments
We now evaluate the robustness of different image classifiers to random and semi-random perturbations, and assess the accuracy of our bounds on various datasets and state-of-the-art classifiers.
Specifically, our theoretical results show that the robustness ||r*_S(x)||_2 of classifiers satisfying the curvature property precisely behaves as √(d/m) ||r*(x)||_2. We first check the accuracy of these results in different classification settings. For a given classifier f and subspace dimension m, we define

    ρ(f; m) = √(m/d) · (1/|D|) Σ_{x ∈ D} ||r*_S(x)||_2 / ||r*(x)||_2,

where S is chosen randomly for each sample x, and D denotes the test set. This quantity provides an indication of the accuracy of our √(d/m) ||r*(x)||_2 estimate of the robustness, and should ideally be equal to 1 (for sufficiently large m). Since ρ is a random quantity (because of S), we report both its mean and standard deviation for different networks in Table 1.
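Given the perturbation norms (computed elsewhere, e.g., with a DeepFool-style procedure [13], which we do not reimplement here), the estimator ρ(f; m) itself is a one-liner; the numbers below are illustrative, not measured values:

```python
import numpy as np

def rho(norms_semi, norms_adv, d, m):
    """Empirical rho(f; m): sqrt(m/d) times the mean of
    ||r*_S(x)||_2 / ||r*(x)||_2 over the test set."""
    ratios = np.asarray(norms_semi) / np.asarray(norms_adv)
    return np.sqrt(m / d) * ratios.mean()

norms_adv = np.array([0.8, 1.1, 0.9])            # hypothetical ||r*(x)||_2 values
norms_semi = np.array([8.2, 11.5, 9.1])          # hypothetical ||r*_S(x)||_2 values
print(rho(norms_semi, norms_adv, d=1024, m=10))  # close to 1 if the estimate is tight
```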
It should be noted that finding ||r*_S||_2 and ||r*||_2 involves solving the optimization problem in (1). We have used an approach similar to [13] to find subspace-minimal perturbations. For each network, we estimate the expectation by averaging ρ(f; m) over 1000 random samples, with S also chosen randomly for each sample. Observe that ρ is surprisingly close to 1, even when m is a small fraction of d. This shows that our quantitative analysis provides very accurate estimates of the robustness to semi-random noise. We visualize the robustness to random noise, semi-random noise (with m = 10)
Table 1: ρ(f; m) for different classifiers f and different subspace dimensions m. The VGG-F and VGG-19 are respectively introduced in [2, 16].

| Classifier | m/d = 1/4 | m/d = 1/16 | m/d = 1/36 | m/d = 1/64 | m/d = 1/100 |
|---|---|---|---|---|---|
| LeNet (MNIST) | 1.00 ± 0.06 | 1.01 ± 0.12 | 1.03 ± 0.20 | 1.01 ± 0.26 | 1.05 ± 0.34 |
| LeNet (CIFAR-10) | 1.01 ± 0.03 | 1.02 ± 0.07 | 1.04 ± 0.10 | 1.06 ± 0.14 | 1.10 ± 0.19 |
| VGG-F (ImageNet) | 1.00 ± 0.01 | 1.02 ± 0.02 | 1.03 ± 0.04 | 1.03 ± 0.05 | 1.04 ± 0.06 |
| VGG-19 (ImageNet) | 1.00 ± 0.01 | 1.02 ± 0.03 | 1.02 ± 0.05 | 1.03 ± 0.06 | 1.04 ± 0.08 |

Figure 3: (a) Original image classified as "Cauliflower". Fooling perturbations for the VGG-F network: (b) random noise, (c) semi-random perturbation with m = 10, (d) worst-case perturbation, all wrongly classified as "Artichoke".
and worst-case perturbations on a sample image in Fig. 3. While random noise is clearly perceptible due to the √d ≈ 400 factor, semi-random noise becomes much less perceptible even with a relatively small value of m = 10, thanks to the 1/√m factor that attenuates the required noise to misclassify the datapoint. It should be noted that the robustness of neural networks to random noise has previously been observed empirically in [17], but we provide here a quantitative and generic explanation for this phenomenon. The high accuracy of our bounds for different state-of-the-art classifiers and different datasets suggests that the decision boundaries of these classifiers have limited curvature κ(B_k), as this is a key assumption of our theoretical findings. To support the validity of this curvature hypothesis in practice, we visualize two-dimensional sections of the classifiers' boundary in Fig. 4 in three different settings. Note that we have opted here for a visualization strategy rather than the numerical estimation of κ(B), as the latter quantity is difficult to approximate in practice in high dimensional problems. In Fig. 4, x0 is chosen randomly from the test set for each data set, and the decision boundaries are shown in the plane spanned by r* and r*_S, where S is a random direction (i.e., m = 1). Different colors on the boundary correspond to boundaries with different classes. It can be observed that the curvature of the boundary is very small except at "junction" points where the boundaries of two different classes intersect. Our curvature assumption, which only assumes a bound on the curvature of the decision boundary between pairs of classes k̂(x0) and k (but not on the global decision boundary that contains junctions with high curvature), is therefore adequate to the decision boundaries of state-of-the-art classifiers according to Fig. 4. Interestingly, the assumption in Corollary 1 is satisfied by taking κ to be an empirical estimate of the curvature of the planar curves in Fig. 4 (a), for the dimension of the subspace being a very small fraction of d, e.g., m = 10^{−3} d. While not reflecting the curvature κ(B_k) that drives the assumption of our theoretical analysis, this result still seems to suggest that the curvature assumption holds in practice.
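One simple way to obtain such an empirical curvature value for a planar section is an algebraic least-squares circle fit to sampled boundary points, taking the curvature as the inverse of the fitted radius; this particular estimator is our illustrative choice and is not claimed to be the one used for Fig. 4.

```python
import numpy as np

def curvature_circle_fit(pts):
    """Kasa algebraic circle fit to 2-D points pts of shape (n, 2);
    returns the estimated curvature 1/R of the fitted circle."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2            # x^2 + y^2 = 2cx*x + 2cy*y + (R^2 - cx^2 - cy^2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = D / 2.0, E / 2.0
    return 1.0 / np.sqrt(F + cx ** 2 + cy ** 2)

# sanity check: noisy arc of a circle of radius 5 -> curvature near 0.2
t = np.linspace(0.0, 1.0, 50)
pts = 5.0 * np.column_stack([np.cos(t), np.sin(t)])
pts += np.random.default_rng(0).normal(scale=0.01, size=pts.shape)
print(curvature_circle_fit(pts))
```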
We now show a simple demonstration of the vulnerability of classifiers to semi-random noise in Fig. 5, where a structured message is hidden in the image and causes data misclassification. Specifically, we consider S to be the span of random translated and scaled versions of the words "NIPS", "SPAIN" and "2016" in an image, such that ⌊d/m⌋ = 228. The resulting perturbations in the subspace are therefore linear combinations of these words with different intensities.⁴

⁴This example departs somewhat from the theoretical framework of this paper, where random subspaces were considered. However, this empirical example suggests that the theoretical findings in this paper seem to approximately hold when the subspace S has statistics that are close to a random subspace.
[Figure 4: two-dimensional normal sections of the decision boundaries around x0; panels (a) VGG-F (ImageNet), (b) LeNet (CIFAR), (c) LeNet (MNIST).]
Figure 4: Boundaries of three classifiers near randomly chosen samples. Axes are normalized by the corresponding ||r*||_2, as our assumption in the theoretical bound depends on the product ||r*||_2 κ. Note the difference in range between the x and y axes. Note also that the range of the horizontal axis in (c) is much smaller than in the other two, hence the illustrated boundary is more curved.
Figure 5: A fooling hidden message: (a) image of a "Potflower", (b) perturbation, (c) perturbed image classified as "Pineapple". S is the span of random translations and scales of the words "NIPS", "SPAIN", and "2016".
The perturbed image x0 + r*_S shown in Fig. 5 (c) is clearly indistinguishable from Fig. 5 (a). This shows that imperceptibly small structured messages can be added to an image, causing data misclassification.
Conclusion
In this work, we precisely characterized the robustness of classifiers in a novel semi-random noise
regime that generalizes the random noise regime. Specifically, our bounds relate the robustness
in this regime to the robustness to adversarial perturbations. Our bounds depend on the curvature
of the decision boundary, the data dimension, and the dimension of the subspace to which the
perturbation belongs. Our results show, in particular, that when the decision boundary has a small
curvature, classifiers are robust to random noise in high dimensional classification problems (even if
the robustness to adversarial perturbations is relatively small). Moreover, for semi-random noise that
is mostly random and only mildly adversarial (i.e., the subspace dimension is small), our results show
that state-of-the-art classifiers remain vulnerable to such perturbations. To improve the robustness to
semi-random noise, our analysis encourages to impose geometric constraints on the curvature of the
decision boundary, as we have shown the existence of an intimate relation between the robustness of
classifiers and the curvature of the decision boundary.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments. We thank Omar Fawzi
and Louis Merlin for the fruitful discussions. We also gratefully acknowledge the support of NVIDIA
Corporation with the donation of the Tesla K40 GPU used for this research. This work has been
partly supported by the Hasler Foundation, Switzerland, in the framework of the CORA project.
References
[1] Caramanis, C., Mannor, S., and Xu, H. (2012). Robust optimization in machine learning. In Sra, S.,
Nowozin, S., and Wright, S. J., editors, Optimization for machine learning, chapter 14. Mit Press.
[2] Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. (2014). Return of the devil in the details:
Delving deep into convolutional nets. In British Machine Vision Conference.
[3] Fawzi, A., Fawzi, O., and Frossard, P. (2015). Analysis of classifiers' robustness to adversarial perturbations. CoRR, abs/1502.02590.
[4] Fawzi, A. and Frossard, P. (2015). Manitest: Are classifiers really invariant? In British Machine Vision Conference (BMVC), pages 106.1–106.13.
[5] Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In
International Conference on Learning Representations (ICLR).
[6] Gu, S. and Rigazio, L. (2014). Towards deep neural network architectures robust to adversarial examples.
arXiv preprint arXiv:1412.5068.
[7] Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., and Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97.
[8] Huang, R., Xu, B., Schuurmans, D., and Szepesvári, C. (2015). Learning with a strong adversary. CoRR, abs/1511.03034.
[9] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105.
[10] Lanckriet, G., Ghaoui, L., Bhattacharyya, C., and Jordan, M. (2003). A robust minimax approach to classification. The Journal of Machine Learning Research, 3:555–582.
[11] Lee, J. M. (2009). Manifolds and differential geometry, volume 107. American Mathematical Society
Providence.
[12] Luo, Y., Boix, X., Roig, G., Poggio, T., and Zhao, Q. (2015). Foveation-based mechanisms alleviate
adversarial examples. arXiv preprint arXiv:1511.06292.
[13] Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. (2016). Deepfool: a simple and accurate method to
fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Sabour, S., Cao, Y., Faghri, F., and Fleet, D. J. (2016). Adversarial manipulation of deep representations.
In International Conference on Learning Representations (ICLR).
[15] Shaham, U., Yamada, Y., and Negahban, S. (2015). Understanding adversarial training: Increasing local
stability of neural nets through robust optimization. arXiv preprint arXiv:1511.05432.
[16] Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR).
[17] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2014).
Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR).
[18] Tabacof, P. and Valle, E. (2016). Exploring the space of adversarial images. IEEE International Joint
Conference on Neural Networks.
[19] Xu, H., Caramanis, C., and Mannor, S. (2009). Robustness and regularization of support vector machines. The Journal of Machine Learning Research, 10:1485–1510.
[20] Zhao, Q. and Griffin, L. D. (2016). Suppressing the unusual: towards robust cnns using symmetric
activation functions. arXiv preprint arXiv:1603.05145.
5,894 | 6,332 | Geometric Dirichlet Means algorithm
for topic inference
Mikhail Yurochkin
Department of Statistics
University of Michigan
[email protected]
XuanLong Nguyen
Department of Statistics
University of Michigan
[email protected]
Abstract
We propose a geometric algorithm for topic learning and inference that is built on
the convex geometry of topics arising from the Latent Dirichlet Allocation (LDA)
model and its nonparametric extensions. To this end we study the optimization of a
geometric loss function, which is a surrogate to the LDA's likelihood. Our method involves a fast optimization-based weighted clustering procedure augmented with
geometric corrections, which overcomes the computational and statistical inefficiencies encountered by other techniques based on Gibbs sampling and variational
inference, while achieving the accuracy comparable to that of a Gibbs sampler. The
topic estimates produced by our method are shown to be statistically consistent
under some conditions. The algorithm is evaluated with extensive experiments on
simulated and real data.
1
Introduction
Most learning and inference algorithms in the probabilistic topic modeling literature can be delineated
along two major lines: the variational approximation popularized in the seminal paper of Blei et al.
(2003), and the sampling based approach studied by Pritchard et al. (2000) and other authors. Both
classes of inference algorithms, their virtues notwithstanding, are known to exhibit certain deficiencies,
which can be traced back to the need for approximating or sampling from the posterior distributions
of the latent variables representing the topic labels. Since these latent variables are not geometrically intrinsic (any permutation of the labels yields the same likelihood), the manipulation of these redundant quantities tends to slow down the computation and compromise the learning accuracy.
In this paper we take a convex geometric perspective of the Latent Dirichlet Allocation, which may
be obtained by integrating out the latent topic label variables. As a result, topic learning and inference
may be formulated as a convex geometric problem: the observed documents correspond to points
randomly drawn from a topic polytope, a convex set whose vertices represent the topics to be inferred.
The original paper of Blei et al. (2003) (see also Hofmann (1999)) contains early hints about a convex
geometric viewpoint, which is left unexplored. This viewpoint had laid dormant for quite some time,
until studied in depth in the work of Nguyen and co-workers, who investigated posterior contraction
behaviors for the LDA both theoretically and practically (Nguyen, 2015; Tang et al., 2014).
Another fruitful perspective on topic modeling can be obtained by partially stripping away the
distributional properties of the probabilistic model and turning the estimation problem into a form
of matrix factorization (Deerwester et al., 1990; Xu et al., 2003; Anandkumar et al., 2012; Arora
et al., 2012). We call this the linear subspace viewpoint. For instance, the Latent Semantic Analysis
approach (Deerwester et al., 1990), which can be viewed as a precursor of the LDA model, looks
to find a latent subspace via singular-value decomposition, but has no topic structure. Notably, the
RecoverKL by Arora et al. (2012) is one of the recent fast algorithms with provable guarantees
coming from the linear subspace perspective.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
The geometric perspective continues to be the main force driving this work. We develop and analyze a
new class of algorithms for topic inference, which exploits both the convex geometry of topic models
and the distributional properties they carry. The main contributions in this work are the following: (i)
we investigate a geometric loss function to be optimized, which can be viewed as a surrogate to the
LDA?s likelihood; this leads to a novel estimation and inference algorithm ? the Geometric Dirichlet
Means algorithm, which builds upon a weighted k-means clustering procedure and is augmented with
a geometric correction for obtaining polytope estimates; (ii) we prove that the GDM algorithm is
consistent, under conditions on the Dirichlet distribution and the geometry of the topic polytope; (iii)
we propose a nonparametric extension of GDM and discuss geometric treatments for some of the
LDA extensions; (v) finally we provide a thorough evaluation of our method against a Gibbs sampler,
a variational algorithm, and the RecoverKL algorithm. Our method is shown to be comparable to a
Gibbs sampler in terms of estimation accuracy, but much more efficient in runtime. It outperforms
RecoverKL algorithm in terms of accuracy, in some realistic settings of simulations and in real data.
The paper proceeds as follows. Section 2 provides a brief background of the LDA and its convex
geometric formulation. Section 3 carries out the contributions outlined above. Section 4 presents
experiments results. We conclude with a discussion in Section 5.
2
Background on topic models
In this section we give an overview of the well-known Latent Dirichlet Allocation model for topic modeling (Blei et al., 2003), and the geometry it entails. Let α ∈ R_+^K and η ∈ R_+^V be hyperparameters, where V denotes the number of words in a vocabulary, and K the number of topics. The K topics are represented as distributions on words: β_k | η ∼ Dir_V(η), for k = 1, ..., K. Each of the M documents can be generated as follows. First, draw the document topic proportions: θ_m | α ∼ Dir_K(α), for m = 1, ..., M. Next, for each of the N_m words in document m, pick a topic label z and then sample a word d from the chosen topic:

    z_{nm} | θ_m ∼ Categorical(θ_m);   d_{nm} | z_{nm}, β_{1...K} ∼ Categorical(β_{z_{nm}}).    (1)

Each of the resulting documents is a vector of length N_m with entries d_{nm} ∈ {1, ..., V}, where n_m = 1, ..., N_m. Because these words are exchangeable by the modeling, they are equivalently represented as a vector of word counts w_m ∈ N^V. In practice, the Dirichlet distributions are often simplified to be symmetric Dirichlet, in which case the hyperparameters α, η ∈ R_+, and we will proceed with this setting. Two most common approaches for inference with the LDA are Gibbs sampling (Griffiths & Steyvers, 2004), based on the Multinomial-Dirichlet conjugacy, and mean-field inference (Blei et al., 2003). The former approach produces more accurate estimates but is less computationally efficient than the latter. The inefficiency of both techniques can be traced to the need for sampling or estimating the (redundant) topic labels. These labels are not intrinsic: any permutation of the topic labels yields the same likelihood function.
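For concreteness, the generative process above can be simulated in a few lines; a sketch assuming numpy, where the word-count vector w_m is drawn directly from the document multinomial (equivalent to sampling the N_m words one by one and counting):

```python
import numpy as np

def simulate_lda(M, N, V, K, alpha, eta, rng):
    """Symmetric-Dirichlet LDA: topics beta (K, V), proportions
    theta (M, K) and word counts w (M, V) for equal-length docs."""
    beta = rng.dirichlet(np.full(V, eta), size=K)     # topics on the word simplex
    theta = rng.dirichlet(np.full(K, alpha), size=M)  # per-document proportions
    p = theta @ beta                                  # document multinomials
    w = np.vstack([rng.multinomial(N, p_m) for p_m in p])
    return beta, theta, w

rng = np.random.default_rng(0)
beta, theta, w = simulate_lda(M=1000, N=100, V=50, K=5, alpha=0.1, eta=0.1, rng=rng)
print(w.shape, w.sum(axis=1)[:3])  # (1000, 50) [100 100 100]
```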
Convex geometry of topics. By integrating out the latent variables that represent the topic labels, we obtain a geometric formulation of the LDA. Indeed, integrating the z's out yields that, for m = 1, ..., M,

    w_m | θ_m, β_{1...K}, N_m ∼ Multinomial(p_{m1}, ..., p_{mV}, N_m),

where p_{mi} denotes the probability of observing the i-th word from the vocabulary in the m-th document, and is given by

    p_{mi} = Σ_{k=1}^K θ_{mk} β_{ki}  for i = 1, ..., V; m = 1, ..., M.    (2)

The model's geometry becomes clear. Each topic is represented by a point β_k lying in the (V − 1)-dimensional probability simplex Δ^{V−1}. Let B := Conv(β_1, ..., β_K) be the convex hull of the K topics β_k; then each document corresponds to a point p_m := (p_{m1}, ..., p_{mV}) lying inside the polytope B. This point of view has been proposed before (Hofmann, 1999), although topic proportions θ were not given any geometric meaning. The following treatment of θ lets us relate to the LDA's Dirichlet prior assumption and complete the geometric perspective of the problem. The Dirichlet distribution generates probability vectors θ_m, which can be viewed as the (random) barycentric coordinates of the document m with respect to the polytope B. Each p_m = Σ_k θ_{mk} β_k is a vector of cartesian coordinates of the m-th document's multinomial probabilities. Given p_m, document m is generated by taking w_m ∼ Multinomial(p_m, N_m). In Section 4 we will show how this interpretation of topic proportions can be utilized by other topic modeling approaches, including for example the RecoverKL algorithm of Arora et al. (2012). In the following, the model geometry is exploited to derive a fast and effective geometric algorithm for inference and parameter estimation.
3
Geometric inference of topics
We shall introduce a geometric loss function that can be viewed as a surrogate to the LDA's likelihood. To begin, let β denote the K × V topic matrix with rows β_k, θ be the M × K document topic proportions matrix with rows θ_m, and W be the M × V normalized word-count matrix with rows w̄_m = w_m / N_m.
3.1
Geometric surrogate loss to the likelihood
Unlike the original LDA formulation, here the Dirichlet distribution on θ can be viewed as a prior on the parameters θ. The log-likelihood of the observed corpora of M documents is

    L(θ, β) = Σ_{m=1}^M Σ_{i=1}^V w_{mi} log( Σ_{k=1}^K θ_{mk} β_{ki} ),

where the parameters θ and β are subject to the constraints Σ_i β_{ki} = 1 for each k = 1, ..., K, and Σ_k θ_{mk} = 1 for each m = 1, ..., M. Partially relaxing these constraints and keeping only the one that the sum of all entries for each row of the matrix product θβ is 1 yields the upper bound L(θ, β) ≤ L(W), where the function L(W) is given by

    L(W) = Σ_m Σ_i w_{mi} log w̄_{mi}.
We can establish a tighter bound, which will prove useful (the proofs of this and other technical results are in the Supplement):

Proposition 1. Given a fixed topic polytope B and θ, let U_m be the set of words present in document m, and assume that p_{mi} > 0 for all i ∈ U_m. Then

    L(W) − Σ_{m=1}^M N_m Σ_{i ∈ U_m} (1/p_{mi}) (w̄_{mi} − p_{mi})² ≤ L(θ, β) ≤ L(W) − (1/2) Σ_{m=1}^M N_m Σ_{i ∈ U_m} (w̄_{mi} − p_{mi})².
Since L(W) is constant, the proposition above shows that maximizing the likelihood has the effect of minimizing the following quantity with respect to both θ and β:

    Σ_m N_m Σ_i (w̄_{mi} − p_{mi})².
For each fixed β (and thus B), minimizing first with respect to θ leads to the following:

    G(B) := min_θ Σ_m N_m Σ_i (w̄_{mi} − p_{mi})² = Σ_{m=1}^M N_m min_{x ∈ B} ||x − w̄_m||₂²,    (3)

where the second equality in the above display is due to p_m = Σ_k θ_{mk} β_k ∈ B. The proposition suggests a strategy for parameter estimation: β (and B) can be estimated by minimizing the geometric loss function G:

    min_B G(B) = min_B Σ_{m=1}^M N_m min_{x ∈ B} ||x − w̄_m||₂².    (4)

In words, we aim to find a convex polytope B ⊂ Δ^{V−1} which is closest to the normalized word counts w̄_m of the observed documents. It is interesting to note the presence of the document length N_m, which provides the weight for the squared ℓ2 error for each document. Thus, our loss function adapts to the varying lengths of documents in the collection. Without the weights, our objective is similar to the sum of squared errors of Nonnegative Matrix Factorization (NMF). Ding et al. (2006) studied the relation between the likelihood function of interest and NMF, but with a different objective of the NMF problem and without geometric considerations. Once B̂ is solved, θ̂ can be obtained as the barycentric coordinates of the projection of w̄_m onto B̂ for each document m = 1, ..., M (cf. Eq. (3)). We note that if K ≤ V, then B is a simplex and β_1, ..., β_K in general positions are the extreme points of B, and the barycentric coordinates are unique. (If K > V, the uniqueness no longer holds.) Finally, p̂_m = θ̂_mᵀ β̂ gives the cartesian coordinates of the point in B̂ that minimizes the Euclidean distance to the maximum likelihood estimate: p̂_m = argmin_{x ∈ B̂} ||x − w̄_m||_2. This projection is not available in closed form, but a fast algorithm is available (Golubitsky et al., 2012), which can easily be extended to find the corresponding distance and to evaluate our geometric objective.
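We do not reproduce the projection algorithm of Golubitsky et al. (2012) here; as a simple stand-in under the same problem statement, one can minimize ||βᵀθ − w̄_m||₂² over θ in the K-simplex by projected gradient descent, using the standard Euclidean simplex projection (a minimal sketch, assuming numpy):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[idx] / (idx + 1.0), 0.0)

def project_onto_polytope(w_bar, beta, n_iter=500):
    """Barycentric coordinates theta of the projection of w_bar onto
    B = Conv(beta_1, ..., beta_K), with beta of shape (K, V)."""
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)
    G = beta @ beta.T
    step = 1.0 / np.linalg.eigvalsh(G)[-1]   # 1/L for this quadratic
    for _ in range(n_iter):
        grad = G @ theta - beta @ w_bar      # grad of 0.5 * ||beta.T @ theta - w_bar||^2
        theta = project_simplex(theta - step * grad)
    return theta, theta @ beta               # (theta_hat, projected point p_hat)

# each document's contribution to the objective (4):
# theta, p = project_onto_polytope(w_bar_m, beta); N_m * ((p - w_bar_m) ** 2).sum()
```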
3.2
Geometric Dirichlet Means algorithm
We proceed to devise a procedure for approximately solving the topic polytope B via Eq. (4): first,
obtain an estimate of the underlying subspace based on weighted k-means clustering and then,
estimate the vertices of the polytope that lie on the subspace just obtained via a geometric correction
technique. Please refer to the Supplement for a clarification of the concrete connection between our
geometric loss function and other objectives which arise in subspace learning and weighted k-means
clustering literature, the connection that motivates the first step of our algorithm.
The Geometric Dirichlet Means (GDM) algorithm estimates a topic polytope B based on the training documents (see Algorithm 1). The algorithm is conceptually simple, and consists of two main steps: first, we perform a (weighted) k-means clustering on the M points w̄_1, ..., w̄_M to obtain the K centroids μ_1, ..., μ_K; and second, we construct a ray emanating from a (weighted) center of the polytope and extending through each of the centroids μ_k until it intersects with a sphere of radius R_k or with the simplex Δ^{V−1} (whichever comes first). The intersection point will be our estimate for the vertices β_k, k = 1, ..., K of the polytope B. The center C of the sphere is given in step 1 of the algorithm, while R_k = max_{1≤m≤M} ||C − w̄_m||_2, where the maximum is taken over those documents m that are clustered with label k. To see the intuition behind the algorithm, let us consider a simple simulation experiment.
Algorithm 1 Geometric Dirichlet Means (GDM)
Input: documents w_1, ..., w_M, K, extension scalar parameters m_1, ..., m_K
Output: topics β_1, ..., β_K
1: C = (1/M) Σ_m w̄_m {find center of the data}
2: μ_1, ..., μ_K = weighted k-means(w̄_1, ..., w̄_M, K) {find centers of K clusters}
3: for all k = 1, ..., K do
4:   β_k = C + m_k (μ_k − C)
5:   if any β_{ki} < 0 then {threshold topic if it is outside vocabulary simplex Δ^{V−1}}
6:     for all i = 1, ..., V do
7:       β_{ki} = β_{ki} 1{β_{ki} > 0} / Σ_i β_{ki} 1{β_{ki} > 0}
8:     end for
9:   end if
10: end for
11: return β_1, ..., β_K
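A compact rendering of Algorithm 1 (a sketch assuming numpy and scikit-learn; using KMeans with sample_weight set to the document lengths is one natural reading of the "weighted k-means" step, consistent with the weights in objective (4); the scale argument anticipates the tuned variant discussed below):

```python
import numpy as np
from sklearn.cluster import KMeans

def gdm(w, K, scale=1.0):
    """Geometric Dirichlet Means. w: (M, V) word-count matrix.
    Returns the (K, V) topic estimates beta."""
    N = w.sum(axis=1)
    w_bar = w / N[:, None]                            # normalized documents
    C = w_bar.mean(axis=0)                            # step 1: center of the data
    km = KMeans(n_clusters=K, n_init=10).fit(w_bar, sample_weight=N)
    mu, labels = km.cluster_centers_, km.labels_
    beta = np.empty_like(mu)
    for k in range(K):
        R_k = np.linalg.norm(w_bar[labels == k] - C, axis=1).max()
        m_k = scale * R_k / np.linalg.norm(mu[k] - C)  # default extension, Eq. (5)
        b = C + m_k * (mu[k] - C)                      # extend the ray through mu_k
        b = np.maximum(b, 0.0)                         # threshold at 0 (no-op if b >= 0,
        beta[k] = b / b.sum()                          # since b already sums to 1)
    return beta
```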
We use the LDA data generative model with α = 0.1, η = 0.1, V = 5, K = 4, M = 5000, N_m = 100. Multidimensional scaling is used for visualization (Fig. 1). We observe that the k-means centroids (pink) do not represent the topics very well, but our geometric modification finds the extreme points of the tetrahedron: red and yellow spheres overlap, meaning we found the true topics. In this example, we have used a very small vocabulary size, but in practice V is much higher and the cluster centroids are often on the boundary of the vocabulary simplex; therefore we have to threshold the betas at 0. Extending the length until R_k is our default choice for the extension parameters:

    m_k = R_k / ||C − μ_k||_2  for k = 1, ..., K,    (5)
Figure 1: Visualization of GDM: Black, green, red and blue are cluster assignments; purple is the
center, pink are cluster centroids, dark red are estimated topics and yellow are the true topics.
but we will see in our experiments that a careful tuning of the extension parameters based on
optimizing the geometric objective (4) over a small range of mk helps to improve the performance
considerably. We call this tGDM algorithm (tuning details are presented in the Supplement). The
connection between extension parameters and the thresholding is the following: if the cluster centroid
assigns probability to a word smaller than the whole data does on average, this word will be excluded
from topic k with large enough mk . Therefore, the extension parameters can as well be used to
control for the sparsity of the inferred topics.
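A minimal version of that tuning (our sketch, reusing gdm and project_onto_polytope from the earlier sketches; the grid is illustrative and the Supplement's exact procedure may differ) scales all default extensions of Eq. (5) by a common factor and keeps the value minimizing the geometric objective (4):

```python
import numpy as np

def geometric_objective(w, beta):
    """G(B) of Eq. (4), evaluated with the projection sketch above."""
    N = w.sum(axis=1)
    w_bar = w / N[:, None]
    total = 0.0
    for m in range(w.shape[0]):
        _, p = project_onto_polytope(w_bar[m], beta)
        total += N[m] * ((p - w_bar[m]) ** 2).sum()
    return total

def tgdm(w, K, grid=(0.7, 0.85, 1.0, 1.15, 1.3)):
    """Tuned GDM: pick the extension multiplier with smallest G(B).
    Note: each gdm call re-runs k-means; fixing the clustering
    across the grid is a cheap refinement."""
    scored = [(geometric_objective(w, gdm(w, K, scale=c)), c) for c in grid]
    best_c = min(scored)[1]
    return gdm(w, K, scale=best_c)
```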
3.3
Consistency of Geometric Dirichlet Means
We shall present a theorem which provides a theoretical justification for the Geometric Dirichlet Means algorithm. In particular, we will show that the algorithm can achieve consistent estimates of the topic polytope, under conditions on the parameters of the Dirichlet distribution of the topic proportion vector θ_m, along with conditions on the geometry of the convex polytope B. The problem of estimating the vertices of a convex polytope given data drawn from the interior of the polytope has long been a subject of convex geometry; the usual setting in this literature is to assume the uniform distribution for the data sample. Our setting is somewhat more general: the distribution of the points inside the polytope will be driven by a symmetric Dirichlet distribution, i.e., θ_m ∼ Dir_K(α) i.i.d. (If α = 1 this results in the uniform distribution on B.) Let n = K − 1. Assume that the document multinomial parameters p_1, ..., p_M (given in Eq. (2)) are the actual data. Now we formulate a geometric problem linking the population version of k-means and polytope estimation:
geometric problem linking the population version of k-means and polytope estimation:
Problem 1. Given a convex polytope A ? Rn , a continuous probability density function f (x)
K
F
supported by A, find a K-partition A =
Ak that minimizes:
k=1
K
XZ
k
k?k ? xk22 f (x) dx,
Ak
where ?k is the center of mass of Ak : ?k :=
R
Ak
1
f (x) dx
R
Ak
xf (x) dx.
This problem is closely related to the Centroidal Voronoi Tessellations (Du et al., 1999). This
connection can be exploited to show that
Lemma 1. Problem 1 has a unique global minimizer.
In the following lemma, a median of a simplex is a line segment joining a vertex of a simplex with
the centroid of the opposite face.
Lemma 2. If A ? Rn is an equilateral simplex with symmetric Dirichlet density f parameterized by
?, then the optimal centers of mass of the Problem 1 lie on the corresponding medians of A.
5
Based upon these two lemmas, consistency is established under two distinct asymptotic regimes.

Theorem 1. Let B = Conv(β_1, ..., β_K) be the true convex polytope from which the M-sample p_1, ..., p_M ∈ Δ^{V−1} is drawn via Eq. (2), where θ_m ∼ Dir_K(α) i.i.d. for m = 1, ..., M.

(a) If B is also an equilateral simplex, then the topic estimates obtained by the GDM algorithm using the extension parameters given in Eq. (5) converge to the vertices of B in probability, as α is fixed and M → ∞.

(b) If M is fixed, while α → 0, then the topic estimates obtained by the GDM also converge to the vertices of B in probability.
3.4 nGDM: nonparametric geometric inference of topics
In practice, the number of topics K may be unknown, necessitating a nonparametric probabilistic
approach such as the well-known Hierarchical Dirichlet Process (HDP) (Teh et al., 2006). Our
geometric approach can be easily extended to this situation. The objective (4) is now given by
    \min_B G(B) = \min_B \sum_{m=1}^{M} N_m \min_{x \in B} \|x - \bar{w}_m\|_2^2 + \lambda |B|,    (6)
where |B| denotes the number of extreme points of the convex polytope B = Conv(\beta_1, ..., \beta_K).
Accordingly, our nGDM algorithm now consists of two steps: (i) solve a penalized and weighted
k-means clustering to obtain the cluster centroids (e.g., using DP-means (Kulis & Jordan, 2012));
(ii) apply the geometric correction for recovering the extreme points, which proceeds as before
(a sketch follows below). Our theoretical analysis can also be extended to this nonparametric
framework. We note that the penalty term is reminiscent of the DP-means algorithm of Kulis &
Jordan (2012), which was derived under a small-variance asymptotics regime. For the HDP this
corresponds to \alpha \to 0, the regime in part (b) of Theorem 1. This is an unrealistic assumption in
practice. Our geometric correction arguably enables accounting for the non-vanishing variance in the
data. We perform a simulation experiment for varying values of \alpha and show that nGDM outperforms
the KL version of DP-means (Jiang et al., 2012) in terms of perplexity. This result is reported in the
Supplement.
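A minimal sketch of step (i), under our own naming and with a simplified cluster-opening rule, might look as follows; it follows the spirit of DP-means rather than reproducing the authors' code.

```python
import numpy as np

def ngdm_cluster(W, N, lam, n_iter=50):
    """Penalized, weighted k-means for objective (6), in the spirit of
    DP-means (Kulis & Jordan, 2012). W holds the normalized document
    frequencies (one row per document), N the document lengths N_m, and
    lam the penalty per extreme point of B."""
    centroids = [np.average(W, axis=0, weights=N)]   # start with one cluster
    for _ in range(n_iter):
        C = np.asarray(centroids)
        d2 = ((W[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (M, K)
        cost = N * d2.min(axis=1)
        if cost.max() > lam:                      # opening a new cluster pays off
            centroids.append(W[int(cost.argmax())].copy())
            continue
        assign = d2.argmin(axis=1)
        for k in range(len(centroids)):           # weighted centroid update
            mask = assign == k
            if mask.any():
                centroids[k] = np.average(W[mask], axis=0, weights=N[mask])
    return np.asarray(centroids)  # the geometric correction is then applied as before
```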
4 Performance evaluation
Simulation experiments We use the LDA model to simulate data and focus our attention on the
perplexity of held-out data and minimum-matching Euclidean distance between the true and estimated
topics (Tang et al., 2014). We explore settings with varying document lengths (Nm increasing from
10 to 1400 - Fig. 2(a) and Fig. 3(a)), different number of documents (M increasing from 100 to 7000
- Fig. 2(b) and Fig. 3(b)) and when lengths of documents are small, while number of documents
is large (Nm = 50, M ranging from 1000 to 15000 - Fig. 2(c) and Fig. 3(c)). This last setting is
of particular interest, since it is the most challenging for our algorithm, which in theory works well
given long documents, but this is not always the case in practice. We compare two versions of the
Geometric Dirichlet Means algorithm: with tuned extension parameters (tGDM) and the default one
(GDM) (cf. Eq. 5) against the variational EM (VEM) algorithm (Blei et al., 2003) (with tuned
hyperparameters), collapsed Gibbs sampling (Griffiths & Steyvers, 2004) (with true data generating
hyperparameters), and RecoverKL (Arora et al., 2012) and verify the theoretical upper bounds for
topic polytope estimation (i.e., either (log M/M)^{0.5} or (log N_m/N_m)^{0.5}); cf. Tang et al. (2014)
and Nguyen (2015). We are also interested in estimating each document's topic proportion via
the projection technique. RecoverKL produced only a topic matrix, which is combined with our
projection based estimates to compute the perplexity (Fig. 3). Unless otherwise specified, we set
\alpha = 0.1, \eta = 0.1, V = 1200, M = 1000, K = 5; Nm = 1000 for each m; the number of held-out
documents is 100; results are averaged over 5 repetitions. Since finding an exact solution to the k-means
objective is NP-hard, we use the algorithm of Hartigan & Wong (1979) with 10 restarts and the
k-means++ initialization. Our results show that (i) Gibbs sampling and tGDM have the best and
almost identical performance in terms of statistical estimation; (ii) RecoverKL and GDM are the
fastest while sharing comparable statistical accuracy; (iii) VEM is the worst in most scenarios due
to its instability (i.e., often producing poor topic estimates); (iv) short document lengths (Fig. 2(c)
and Fig. 3(c)) do not degrade the performance of GDM (this appears to be an effect of the law of large
numbers, as the algorithm relies on the cluster means, which are obtained by averaging over a large
number of documents); (v) our procedure for estimating document topic proportions results in a
good quality perplexity of the RecoverKL algorithm in all scenarios (Fig. 3) and could be potentially
utilized by other algorithms. Additional simulation experiments are presented in the Supplement,
which consider settings with varying Nm, \alpha and the nonparametric extension.
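For reference, the evaluation metric used throughout, the minimum-matching Euclidean distance, can be sketched as below; we match topics with the Hungarian algorithm as a surrogate for the optimal permutation, which is our own simplification rather than the exact formulation of Tang et al. (2014).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minimum_matching_distance(true_topics, est_topics):
    """Match estimated topics one-to-one to true topics and report the
    largest matched Euclidean distance. Matching minimizes total cost
    via the Hungarian algorithm; a sketch, not the authors' code."""
    D = np.linalg.norm(true_topics[:, None, :] - est_topics[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(D)
    return D[rows, cols].max()
```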
[Plot panels omitted: minimum-matching distance and held-out perplexity curves for GDM, tGDM, Gibbs sampling, VEM and RecoverKL, with 0.1(log(M)/M)^{0.5} and 0.1(log(Nm)/Nm)^{0.5} reference bounds; x-axes: document length Nm, number of documents M (Nm = 1000 and Nm = 50), and \eta.]
Figure 2: Minimum-matching Euclidean distance: increasing Nm, M = 1000 (a); increasing M, Nm = 1000 (b); increasing M, Nm = 50 (c); increasing \eta, Nm = 50, M = 5000 (d).
[Plot panels omitted: held-out perplexity curves for the same methods and settings.]
Figure 3: Perplexity of the held-out data: increasing Nm, M = 1000 (a); increasing M, Nm = 1000 (b); increasing M, Nm = 50 (c); increasing \eta, Nm = 50, M = 5000 (d).
Comparison to RecoverKL Both tGDM and RecoverKL exploit the geometry of the model, but
they rely on very different assumptions: RecoverKL requires the presence of anchor words in the
topics and exploits this in a crucial way (Arora et al., 2012); our method relies on long documents in
theory, even though the violation of this does not appear to degrade its performance in practice, as we
have shown earlier. The comparisons are performed by varying the document length Nm and varying
the Dirichlet parameter \eta (recall that \beta_k | \eta \sim Dir_V(\eta)). In terms of perplexity, RecoverKL, GDM
and tGDM perform similarly (see Fig. 4(c,d)), with a slight edge to tGDM. Pronounced differences
come in the quality of the topics' word distribution estimates. To give RecoverKL the advantage, we
considered manually inserting anchor words for each topic generated, while keeping the document
length short, Nm = 50 (Fig. 4(a,c)). We found that tGDM outperforms RecoverKL when \eta \le 0.3,
an arguably more common setting, while RecoverKL is more accurate when \eta \ge 0.5. However, if the
presence of anchor words is not explicitly enforced, tGDM always outperforms RecoverKL in terms
of topic distribution estimation accuracy for all \eta (Fig. 2(d)). The superiority of tGDM persists even
as Nm varies from 50 to 10000 (Fig. 4(b)), while GDM is comparable to RecoverKL in this setting.
NIPS corpora analysis We proceed with the analysis of the NIPS corpus.^1 After preprocessing,
there are 1738 documents and 4188 unique words. Document lengths range from 39 to 1403 with a
mean of 272. We consider K = 5, 10, 15, 20, \alpha = 5/K, \eta = 0.1. For each value of K we set aside
300 documents chosen at random to compute the perplexity and average results over 3 repetitions.
Our results are compared against Gibbs sampling, Variational EM and RecoverKL (Table 1). For
K = 10, GDM with 1500 k-means iterations and 5 restarts in R took 50sec; Gibbs sampling with
5000 iterations took 10.5min; VEM with 750 variational, 1500 EM iterations and 3 restarts took
25.2min; RecoverKL coded in Python took 1.1min. We note that with recent developments (e.g.,
^1 https://archive.ics.uci.edu/ml/datasets/Bag+of+Words
[Plot panels omitted: MM distance and perplexity curves for GDM, tGDM and RecoverKL; x-axes: \eta and document length Nm.]
Figure 4: MM distance and Perplexity for varying \eta, Nm = 50 with anchors (a,c); varying Nm (b,d).
(Hoffman et al., 2013)) VEM could be made faster, but its statistical accuracy remains poor. Although
RecoverKL is as fast as GDM, its perplexity performance is poor and worsens with more
topics, which we believe could be due to a lack of anchor words in the data. We present topics found
by Gibbs sampling, GDM and RecoverKL for K = 10 in the Supplement.
Table 1: Perplexities of the four topic modeling algorithms trained on the NIPS dataset.

          GDM    RecoverKL   VEM    Gibbs sampling
K = 5     1269   1378        1980   1168
K = 10    1061   1235        1953   924
K = 15    957    1409        1545   802
K = 20    763    1586        1352   704
5 Discussion
We wish to highlight a conceptual aspect of GDM distinguishing it from moment-based methods
such as RecoverKL. GDM operates on the document-to-document distance/similarity matrix, as
opposed to the second-order word-to-word matrix. So, from an optimization viewpoint, our method
can be viewed as the dual to the RecoverKL method, which must require the anchor-word assumption to
be computationally feasible and theoretically justifiable. While the computational complexity of
RecoverKL grows with the vocabulary size and not the corpus size, our convex geometric approach
remains computationally feasible when the number of documents is large: since only documents
near the polytope boundary are relevant to the inference of the extreme points, we can discard most
documents residing near the polytope's center.
We discuss some potential improvements and extensions next. The tGDM algorithm showed superior
performance when the extension parameters are optimized. This procedure, while computationally
effective relative to methods such as the Gibbs sampler, may still not be scalable to massive datasets. It
seems possible to reformulate the geometric objective as a function of extension parameters, whose
optimization can be performed more efficiently. In terms of theory, we would like to establish the
error bounds by exploiting the connection of topic inference to the geometric problem of Centroidal
Voronoi Tessellation of a convex polytope.
The geometric approach to topic modeling and inference may lend itself naturally to other LDA
extensions, as we have demonstrated with the nGDM algorithm for the HDP (Teh et al., 2006). Correlated
topic models of Blei & Lafferty (2006a) also fit naturally into the geometric framework: we would
need to adjust the geometric modification to capture the logistic normal distribution of topic proportions
inside the topic polytope. Another interesting direction is to consider dynamic (Blei & Lafferty,
2006b) (extreme points of topic polytope evolving over time) and supervised (McAuliffe & Blei,
2008) settings. Such settings appear relatively more challenging, but they are worth pursuing further.
Acknowledgments
This research is supported in part by grants NSF CAREER DMS-1351362 and NSF CNS-1409303.
References
Anandkumar, A., Foster, D. P., Hsu, D., Kakade, S. M., and Liu, Y. A spectral algorithm for Latent Dirichlet Allocation. Advances in Neural Information Processing Systems, 2012.
Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. A practical algorithm for topic modeling with provable guarantees. arXiv preprint arXiv:1212.4777, 2012.
Blei, D. M. and Lafferty, J. D. Correlated topic models. Advances in Neural Information Processing Systems, 2006a.
Blei, D. M. and Lafferty, J. D. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pp. 113–120. ACM, 2006b.
Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent Dirichlet Allocation. J. Mach. Learn. Res., 3:993–1022, March 2003.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391, Sep 01 1990.
Ding, C., Li, T., and Peng, W. Nonnegative matrix factorization and probabilistic latent semantic indexing: Equivalence, chi-square statistic, and a hybrid method. In Proceedings of the National Conference on Artificial Intelligence, volume 21, pp. 342. AAAI Press; MIT Press, 2006.
Du, Q., Faber, V., and Gunzburger, M. Centroidal Voronoi Tessellations: applications and algorithms. SIAM Review, 41(4):637–676, 1999.
Golubitsky, O., Mazalov, V., and Watt, S. M. An algorithm to compute the distance from a point to a simplex. ACM Commun. Comput. Algebra, 46:57–57, 2012.
Griffiths, T. L. and Steyvers, M. Finding scientific topics. PNAS, 101(suppl. 1):5228–5235, 2004.
Hartigan, J. A. and Wong, M. A. Algorithm AS 136: A K-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):100–108, 1979.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic variational inference. J. Mach. Learn. Res., 14(1):1303–1347, May 2013.
Hofmann, T. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '99, pp. 50–57. ACM, 1999.
Jiang, K., Kulis, B., and Jordan, M. I. Small-variance asymptotics for exponential family Dirichlet process mixture models. In Advances in Neural Information Processing Systems, pp. 3158–3166, 2012.
Kulis, B. and Jordan, M. I. Revisiting k-means: new algorithms via Bayesian nonparametrics. In Proceedings of the 29th International Conference on Machine Learning. ACM, 2012.
McAuliffe, J. D. and Blei, D. M. Supervised topic models. In Advances in Neural Information Processing Systems, pp. 121–128, 2008.
Nguyen, X. Posterior contraction of the population polytope in finite admixture models. Bernoulli, 21(1):618–646, 02 2015.
Pritchard, J. K., Stephens, M., and Donnelly, P. Inference of population structure using multilocus genotype data. Genetics, 155(2):945–959, 2000.
Tang, J., Meng, Z., Nguyen, X., Mei, Q., and Zhang, M. Understanding the limiting factors of topic modeling via posterior contraction analysis. In Proceedings of the 31st International Conference on Machine Learning, pp. 190–198. ACM, 2014.
Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476), 2006.
Xu, W., Liu, X., and Gong, Y. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '03, pp. 267–273. ACM, 2003.
Regularization With Stochastic Transformations and
Perturbations for Deep Semi-Supervised Learning
Mehdi Sajjadi
Mehran Javanmardi
Tolga Tasdizen
Department of Electrical and Computer Engineering
University of Utah
{mehdi, mehran, tolga}@sci.utah.edu
Abstract
Effective convolutional neural networks are trained on large sets of labeled data.
However, creating large labeled datasets is a very costly and time-consuming
task. Semi-supervised learning uses unlabeled data to train a model with higher
accuracy when there is a limited set of labeled data available. In this paper,
we consider the problem of semi-supervised learning with convolutional neural
networks. Techniques such as randomized data augmentation, dropout and random
max-pooling provide better generalization and stability for classifiers that are
trained using gradient descent. Multiple passes of an individual sample through the
network might lead to different predictions due to the non-deterministic behavior
of these techniques. We propose an unsupervised loss function that takes advantage
of the stochastic nature of these methods and minimizes the difference between
the predictions of multiple passes of a training sample through the network. We
evaluate the proposed method on several benchmark datasets.
1 Introduction
Convolutional neural networks (ConvNets) [1, 2] achieve state-of-the-art accuracy on a variety of
computer vision tasks, including classification, object localization, detection, recognition and scene
labeling [3, 4]. The advantage of ConvNets partially originates from their complexity (large number
of parameters), but this can result in overfitting without a large amount of training data. However,
creating a large labeled dataset is very costly. A notable example is the "ImageNet" [5] dataset with
1000 category and more than 1 million training images. The state-of-the-art accuracy of this dataset
is improved every year using ConvNet-based methods (e.g., [6, 7]). This dataset is the result of
significant manual effort. However, with around 1000 images per category, it barely contains enough
training samples to prevent the ConvNet from overfitting [7]. On the other hand, unlabeled data is
cheap to collect. For example, there are numerous online resources for images and video sequences
of different types. Therefore, there has been an increased interest in exploiting the readily available
unlabeled data to improve the performance of ConvNets.
Randomization plays an important role in the majority of learning systems. Stochastic gradient
descent, dropout [8], randomized data transformation and augmentation [9] and many other training
techniques that are essential for fast convergence and effective generalization of the learning functions
introduce some non-deterministic behavior to the learning system. Due to these uncertainties, passing
a single data sample through a learning system multiple times might lead to different predictions.
Based on this observation, we introduce an unsupervised loss function optimized by gradient descent
that takes advantage of this randomization effect and minimizes the difference in predictions of
multiple passes of a data sample through the network during the training phase, which leads to better
generalization in testing time. The proposed unsupervised loss function specifically regularizes the
network based on the variations caused by randomized data augmentation, dropout and randomized
max-pooling schemes. This loss function can be combined with any supervised loss function. In this
paper, we apply the proposed unsupervised loss function to ConvNets as a state-of-the-art supervised
classifier. We show through numerous experiments that this combination leads to a competitive
semi-supervised learning method.
2 Related Work
There are many approaches to semi-supervised learning in general. Self-training and co-training
[10, 11] are two well-known classic examples. Another set of approaches is based on generative
models, for example, methods based on Gaussian Mixture Models (GMM) and Hidden Markov
Models (HMM) [12]. These generative models generally try to use unlabeled data in modeling the
joint probability distribution of the training data and labels. Transductive SVM (TSVM) [13] and
S3VM [14] are another semi-supervised learning approach that tries to find a decision boundary with
a maximum margin on both labeled and unlabeled data. A large group of semi-supervised methods is
based on graphs and the similarities between the samples [15, 16]. For example, if a labeled sample
is similar to an unlabeled sample, its label is assigned to that unlabeled sample. In these methods,
the similarities are encoded in the edges of a graph. Label propagation [17] is an example of these
methods in which the goal is to minimize the difference between model predictions of two samples
with large weighted edge. In other words, similar samples tend to get similar predictions.
In this paper, our focus is on semi-supervised deep learning. There has always been interest in
exploiting unlabeled data to improve the performance of ConvNets. One approach is to use unlabeled
data to pre-train the filters of ConvNet [18, 19]. The goal is to reduce the number of training epochs
required to converge and improve the accuracy compared to a model trained by random initialization.
Predictive sparse decomposition (PSD) [20] is one example of these methods used for learning the
weights in the filter bank layer. The works presented in [21] and [22] are two recent examples of
learning features by pre-training ConvNets using unlabeled data. In these approaches, an auxiliary
target is defined for a pair of unlabeled images [21] or a pair of patches from a single unlabeled
image [22]. Then a pair of ConvNets is trained to learn descriptive features from unlabeled images.
These features can be fine-tuned for a specific task with a limited set of labeled data. However, many
recent ConvNet models with state-of-the-art accuracy start from randomly initialized weights using
techniques such as Xavier?s method [23, 6]. Therefore, approaches that make better use of unlabeled
data during training instead of just pre-training are more desired.
Another example of semi-supervised learning with ConvNets is region embedding [24], which is used
for text categorization. The work in [25] is also a deep semi-supervised learning method based on
embedding techniques. Unlabeled video frames are also being used to train ConvNets [26, 27]. The
target of the ConvNet is calculated based on the correlations between video frames. Another notable
example is semi-supervised learning with ladder networks [28] in which the sums of supervised and
unsupervised loss functions are simultaneously minimized by backpropagation. In this method, a
feedforward model is assumed to be an encoder. The proposed network consists of a noisy encoder
path and a clean one. A decoder is added to each layer of the noisy path. This decoder is supposed to
reconstruct a clean activation of each layer. The unsupervised loss function is the difference between
the output of each layer in clean path and its corresponding reconstruction from the noisy path.
Another approach by [29] is to take a random unlabeled sample and generate multiple instances by
randomly transforming that sample multiple times. The resulting set of images forms a surrogate
class. Multiple surrogate classes are produced and a ConvNet is trained on them. One disadvantage
of this method is that it does not scale well with the number of unlabeled examples because a separate
class is needed for every training sample during unsupervised training. In [30], the authors propose
a mutual-exclusivity loss function that forces the set of predictions for a multiclass dataset to be
mutually-exclusive. In other words, it forces the classifier's prediction to be close to one only for
one class and zero for the others. It is shown that this loss function makes use of unlabeled data and
pushes the decision boundary to a less dense area of decision space.
Another set of works related to our approach try to restrict the variations of the prediction function.
Tangent distance and tangent propagation proposed by [31] enforce local classification invariance with
respect to the transformations of input images. Here, we propose a simpler method that additionally
minimizes the internal variations of the network caused by dropout and randomized pooling and leads
to state-of-the-art results on MNIST (with 100 labeled samples), CIFAR10 and CIFAR100. Another
example is Slow Feature Analysis (SFA) (e.g., [32] and [33]) that encourages the representations of
temporally close data to exhibit small differences.
3 Method
Given any training sample, a model's prediction should be the same under any random transformation
of the data and perturbations to the model. The transformations can be any linear and non-linear data
augmentation being used to extend the training data. The disturbances include dropout techniques
and randomized pooling schemes. In each pass, each sample can be randomly transformed or the
hidden nodes can be randomly activated. As a result, the network's prediction can be different for
multiple passes of the same training sample. However, we know that each sample is assigned to only
one class. Therefore, the network's prediction is expected to be the same despite transformations
and disturbances. We introduce an unsupervised loss function that minimizes the mean squared
differences between different passes of an individual training sample through the network. Note that
we do not need to know the label of a training sample in order to enforce this loss. Therefore, the
proposed loss function is completely unsupervised and can be used along with supervised training as
a semi-supervised learning method. Even if we don't have a separate unlabeled set, we can apply the
proposed loss function to samples of the labeled set to enforce stability.
Here, we formally define the proposed unsupervised loss function. We start with a dataset with N
training samples and C classes. Let us assume that f^j(x_i) is the classifier's prediction vector on the
i-th training sample during the j-th pass through the network. We assume that each training sample
is passed n times through the network. We define T^j(x_i) to be a random linear or non-linear
transformation applied to the training sample x_i before the j-th pass through the network. The proposed loss
function for each data sample is:
    l_U^{TS} = \sum_{i=1}^{N} \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} \| f^j(T^j(x_i)) - f^k(T^k(x_i)) \|_2^2    (1)
Where "TS" stands for transformation/stability. We pass a training sample through the network n times.
In each pass, the transformation T^j(x_i) produces a different input to the network from the original
training sample. In addition, each time the randomness inside the network, which can be caused
by dropout or randomized pooling schemes, leads to a different prediction output. We minimize
the sum of squared differences between each possible pair of predictions. We can minimize this
objective function using gradient descent. Although Eq. 1 is quadratically dependent on the number
of augmented versions of the data (n), calculation of loss and gradient is only based on the prediction
vectors. So, the computing cost is negligible even for large n. Note that recent neural-network-based
methods are optimized on batches of training samples instead of a single sample (batch vs. online
training). We can design batches to contain replications of training samples so we can easily optimize
this transformation/stability loss function. If we use data augmentation, we put different transformed
versions of an unlabeled data in the mini-batch instead of replication. This unsupervised loss function
can be used with any backpropagation-based algorithm. Even though every mini-batch contains
replications of a training sample, these are used to calculate a single backpropagation signal avoiding
gradient bias and not adversely affecting convergence. It is also possible to combine this loss with any
supervised loss function. We reserve part of the mini-batch for labeled data which are not replicated.
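The loss and its gradient for one sample's n passes can be sketched as follows; this is a framework-agnostic numpy illustration under our own naming, not the cuda-convnet or sparse-convolutional-network implementation used in the experiments.

```python
import numpy as np

def transform_stability_loss(preds):
    """Transformation/stability loss of Eq. (1) for one training sample.
    preds is an (n, C) array whose j-th row is the prediction vector
    f^j(T^j(x_i)) from the j-th stochastic pass; the loss sums squared
    differences over all pairs of passes."""
    n = preds.shape[0]
    loss = 0.0
    for j in range(n - 1):
        for k in range(j + 1, n):
            diff = preds[j] - preds[k]
            loss += float(diff @ diff)
    return loss

def transform_stability_grad(preds):
    """Gradient w.r.t. each prediction vector:
    d l / d f^j = 2 * (n * f^j - sum_k f^k), so the cost stays linear
    in the number of replicas even though the loss has O(n^2) pairs."""
    n = preds.shape[0]
    return 2.0 * (n * preds - preds.sum(axis=0, keepdims=True))
```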
As mentioned in Section 2, the mutual-exclusivity loss function of [30] forces the classifier's prediction
vector to have only one non-zero element. This loss function naturally complements the transformation/stability loss function. In supervised learning, each element of the prediction vector is pushed
towards zero or one depending on the corresponding element in label vector. The proposed loss
minimizes the l2 -norm of the difference between predictions of multiple transformed versions of
a sample, but it does not impose any restrictions on the individual elements of a single prediction
vector. As a result, each prediction vector might be a trivial solution instead of a valid prediction
due to lack of labels. Mutual-exclusivity loss function forces each prediction vector to be valid and
prevents trivial solutions. This loss function for the training sample xi is defined as follows:
    l_U^{ME} = - \sum_{i=1}^{N} \sum_{j=1}^{n} \left( \sum_{k=1}^{C} f_k^j(x_i) \prod_{l=1, l \neq k}^{C} (1 - f_l^j(x_i)) \right)    (2)
Where "ME" stands for mutual-exclusivity and f_k^j(x_i) is the k-th element of the prediction vector f^j(x_i). In
the experiments, we show that the combination of both loss functions leads to further improvements
in the accuracy of the models. We define the combination of both loss functions as the transformation/stability plus mutual-exclusivity loss function:

    l_U = \lambda_1 l_U^{ME} + \lambda_2 l_U^{TS}    (3)
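For illustration, a numpy sketch of Eq. (2) and the combined objective of Eq. (3) might look as below; the default weights match the values the paper reports for most experiments, while the function names are our own.

```python
import numpy as np

def mutual_exclusivity_loss(preds):
    """Mutual-exclusivity loss of Eq. (2) for one sample's n passes.
    Each row of preds is a prediction vector in [0, 1]^C; the inner term
    peaks when exactly one class output is 1 and the rest are 0, and the
    leading minus sign turns that into a quantity to minimize."""
    loss = 0.0
    for f in preds:                       # one pass j at a time
        for k in range(len(f)):
            loss -= f[k] * np.prod(np.delete(1.0 - f, k))
    return loss

def combined_loss(preds, lam1=0.1, lam2=1.0):
    """Eq. (3): l_U = lam1 * l_ME + lam2 * l_TS."""
    diffs = preds[:, None, :] - preds[None, :, :]
    l_ts = 0.5 * float((diffs ** 2).sum())    # each unordered pair counted once
    return lam1 * mutual_exclusivity_loss(preds) + lam2 * l_ts
```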
4 Experiments
We show the effect of the proposed unsupervised loss functions using ConvNets on MNIST [2],
CIFAR10 and CIFAR100 [34], SVHN [35], NORB [36] and ILSVRC 2012 challenge [5]. We use
two frameworks to implement and evaluate the proposed loss function. The first one is cuda-convnet
[37], which is the original implementation of the well-known AlexNet model. The second framework
is the sparse convolutional networks [38] with fractional max-pooling [39], which is a more recent
implementation of ConvNets achieving state-of-the-art accuracy on CIFAR10 and CIFAR100 datasets.
We show through different experiments that by using the proposed loss function, we can improve
the accuracy of the models trained on a few labeled samples on both implementations. In Eq. 1, we
set n to be 4 for experiments conducted using cuda-convnet and 5 for experiments performed using
sparse convolutional networks. Sparse convolutional network allows for any arbitrary batch sizes. As
a result, we tried different options for n and n = 5 is the optimal choice. However, cuda-convnet
allows for mini-batches of size 128. Therefore, it is not possible to use n = 5. Instead, we decided
to use n = 4. In practice the difference is insignificant. We used MNIST to find the optimal n. We
tried different n up to 10 and did not observe improvements for n larger than 5. It must be noted
that replicating a training sample four or five times does not necessarily increase the computational
complexity with the same factor. Based on the experiments, with higher n fewer training epochs
are required for the models to converge. We perform multiple experiments for each dataset. We
use the available training data of each dataset to create two sets: labeled and unlabeled. We do not
use the labels of the unlabeled set during training. It must be noted that for the experiments with
data augmentation, we apply data augmentation to both labeled and unlabeled set. We compare
models that are trained only on the labeled set with models that are trained on both the labeled set
and the unlabeled set using the unsupervised loss function. We show that by using the unsupervised
loss function, we can improve the accuracy of classifiers on benchmark datasets. For experiments
performed using sparse convolutional network, we describe the network parameters using the format
adopted from the original paper [39]:
    (10kC2 − FMP√2)^5 − C2 − C1
In the above example network, 10k is the number of maps in the k-th convolutional layer. In this
example, k = 1, 2, ..., 5. C2 specifies that convolutions use a kernel size of 2. FMP√2 indicates
that convolutional layers are followed by a fractional max-pooling (FMP) layer [39] that reduces the
size of feature maps by a factor of √2.
[30] complements the transformation/stability loss function. We implement that loss function in both
cuda-convnet and sparse convolutional networks as well. We experimentally choose \lambda_1 and \lambda_2 in Eq.
3. However, the performance of the models is not overly sensitive to these parameters, and in most of
the experiments it is fixed to \lambda_1 = 0.1 and \lambda_2 = 1.
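The mini-batch construction described above can be sketched as follows; the names and the flat layout are illustrative assumptions, not the exact cuda-convnet or sparse-convolutional-network pipeline.

```python
import numpy as np

def build_minibatch(labeled_x, labeled_y, unlabeled_x, transform, n=4):
    """Assemble one mini-batch: labeled samples appear once, while every
    unlabeled sample appears n times, each copy passed through an
    independent random transformation T^j."""
    batch, labels, is_labeled = [], [], []
    for x, y in zip(labeled_x, labeled_y):
        batch.append(transform(x))        # augmentation applies to labeled data too
        labels.append(y)
        is_labeled.append(True)
    for x in unlabeled_x:
        for _ in range(n):                # n replicas -> n stochastic passes
            batch.append(transform(x))
            labels.append(-1)             # no label available
            is_labeled.append(False)
    return np.stack(batch), np.asarray(labels), np.asarray(is_labeled)
```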
4.1 MNIST
MNIST is the most frequently used dataset in the area of digit classification. It contains 60000
training and 10000 test samples of size 28 × 28 pixels. We perform experiments on MNIST using a
sparse convolutional network with the following architecture: (32kC2 − FMP√2)^6 − C2 − C1.
We use dropout to regularize the network. The ratio of dropout gradually increases from the first
layer to the last layer. We do not use any data augmentation for this task. In other words, T j (xi )
of Eq. 1 is identity function for this dataset. In this case, we take advantage of the random effects
of dropout and fractional max-pooling using the unsupervised loss function. We randomly select
10 samples from each class (total of 100 labeled samples). We use all available training data as
the unlabeled set. First, we train a model based on this labeled set only. Then, we train models
by adding unsupervised loss functions. In separate experiments, we add transformation/stability
loss function, mutual-exclusivity loss function and the combination of both. Each experiment is
repeated five times with a different random subset of training samples. We repeat the same set of
experiments using 100% of MNIST training samples. The results are given in Table 1. We can
see that the proposed loss significantly improves the accuracy on test data. We also compare the
results with ladder networks [28]. The combination of both loss functions reduces the error rate to
0.55% ± 0.16, which is the state-of-the-art for the task of MNIST with 100 labeled samples to the best
of our knowledge. The state-of-the-art error rate on MNIST using all training data without data
augmentation is 0.24% [40]. It can be seen that we can achieve a close accuracy by using only 100
labeled samples.
Table 1: Error rates (%) on the test set for MNIST (mean % ± std).

       labeled        transform          mut-excl       both           ladder        ladder net
       data only      /stability loss    loss [30]      losses         net. [28]     baseline [28]
100:   5.44 ± 1.48    0.76 ± 0.61        3.92 ± 1.12    0.55 ± 0.16    0.89 ± 0.50   6.43 ± 0.84
all:   0.32 ± 0.02    0.29 ± 0.02        0.30 ± 0.03    0.27 ± 0.02    -             0.36
4.2 SVHN and NORB
SVHN is another digit classification task similar to MNIST. This dataset contains about 70000 images
for training and more than 500000 easier images [35] for validation. We do not use the validation
set. The test set contains 26032 images, which are RGB images of size 32 × 32. Generally, SVHN
is a more difficult task compared to MNIST because of the large variations in the images. We do
not perform any pre-processing for this dataset. We simply convert the color images to grayscale
by removing hue and saturation information. NORB is a collection of stereo images in six classes.
The training set contains 10 folds of 29160 images. It is common practice to use only the first
two folds for training. The test set contains two folds, totaling 58320. The original images are
108 × 108. However, we scale them down to 48 × 48 similar to [9]. We perform experiments on
these two datasets using both cuda-convnet and sparse convolutional network implementations of the
unsupervised loss function.
In the first set of experiments, we use cuda-convnet to train models with different ratios of labeled and
unlabeled data. We randomly choose 1%, 5%, 10%, 20% and 100% of training samples as labeled
data. All of the training samples are used as the unlabeled set. For each labeled set, we train four
models using cuda-convnet. The first model uses labeled set only. The second model is trained on
unlabeled set using mutual-exclusivity loss function in addition to the labeled set. The third model is
trained on the unlabeled set using the transformation/stability loss function in addition to the labeled
set. The last model is also trained on both sets but combines two unsupervised loss functions. Each
experiment is repeated five times. For each repetition, we use a different subset of training samples as
labeled data. The cuda-convnet model consists of two convolutional layers with 64 maps and kernel
size of 5, two locally connected layers with 32 maps and kernel size 3. Each convolutional layer is
followed by a max-pooling layer. A fully connected layer with 256 nodes is added before the last
layer. We use data augmentation for these experiments. T^j(x_i) of Eq. 1 crops every training sample
to 28 × 28 for SVHN and 44 × 44 for NORB at random locations. T^j(x_i) also randomly rotates
training samples by up to ±20 degrees. These transformations are applied to both labeled and unlabeled sets.
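A scipy-based sketch of this transformation T^j is given below; it reproduces the described crop-and-rotate augmentation under our own naming, not the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def random_transform(img, crop, max_angle=20.0, rng=np.random):
    """Random transformation T^j for SVHN/NORB: rotate by up to
    +/-20 degrees and crop a `crop` x `crop` window (28 for SVHN,
    44 for NORB) at a random location."""
    angle = rng.uniform(-max_angle, max_angle)
    img = rotate(img, angle, reshape=False, mode='nearest')
    h, w = img.shape[:2]
    top = rng.randint(0, h - crop + 1)
    left = rng.randint(0, w - crop + 1)
    return img[top:top + crop, left:left + crop]
```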
The results are shown in Figure 1 for SVHN and Figure 2 for NORB. Each point in the graph is the
mean error rate of five repetitions. The error bars show the standard deviation of these five repetitions.
As expected, we can see that in all experiments the classification accuracy is improved as we add
more labeled data. However, we observe that for each set of labeled data we can improve the results
by using the proposed unsupervised loss functions. We can also see that when the number of labeled
samples is small, the improvement is more significant. For example, when we use only 1% of labeled
data, we gain an improvement in accuracy of about 2.5 times by using unsupervised loss functions.
As we add more labeled samples, the difference in accuracy between semi-supervised and supervised
approaches becomes smaller. Note that the combination of transformation/stability loss function and
mutual-exclusivity loss function improves the accuracy even further. As mentioned earlier, these two
unsupervised loss functions complement each other. Therefore, in most of the experiments we use
the combination of two unsupervised loss functions.
We perform another set of experiments on these two datasets using sparse convolutional networks as a
state-of-the-art classifier. We create five sets of labeled data. For each set, we randomly pick a different
1% subset of training samples as labeled set and all training data as unlabeled set. We train two
models: the first trained only on labeled data, and the second using the labeled set and a combination
of both unsupervised losses. Similarly, we train models using all available training data as both the
labeled set and unlabeled set. We do not use data augmentation for any of these experiments. In other
5
[Plot panels omitted: test error rate (%) vs. percent of labeled data (1, 5, 10, 20, 100) for SVHN and NORB; curves: both unsupervised losses, labeled data only, unsupervised transformation/stability loss, unsupervised mutual-exclusivity loss.]
Figure 1: SVHN dataset: semi-supervised learning vs. training with labeled data only.
Figure 2: NORB dataset: semi-supervised learning vs. training with labeled data only.
words, T^j(x_i) of Eq. 1 is the identity function. As a result, dropout and random max-pooling are the only
sources of variation in this case. We use the following model: (32kC2 − FMP∛2)^12 − C2 − C1.
Similar to MNIST, we use dropout to regularize the network. Again, the ratio of dropout gradually
increases from the first layer to the last layer. The results (average of five error rates) are shown in
Table 2. Here, we can see that by using unsupervised loss functions we can significantly improve the
accuracy of the classifier by trying to minimize the variation in prediction of the network. In addition,
for NORB dataset we can observe that by using only 1% of labeled data and applying unsupervised
loss functions, we can achieve accuracy that is close to the case when we use 100% of labeled data.
Table 2: Error on test data for SVHN and NORB with 1% and 100% of data (mean % ± std).

                     SVHN                            NORB
                     1% of data      100% of data    1% of data      100% of data
labeled data only:   12.25 ± 0.80    2.28 ± 0.05     10.01 ± 0.81    1.63 ± 0.12
semi-supervised:     6.03 ± 0.62     2.22 ± 0.04     2.15 ± 0.37     1.63 ± 0.07
4.3 CIFAR10
CIFAR10 is a collection of 60000 tiny 32 × 32 images of 10 categories (50000 for training and 10000
for test). We use sparse convolutional networks to perform experiments on this dataset. For this
dataset, we create 10 labeled sets. Each set contains 4000 samples that are randomly picked from
the training set. All 50000 training samples are used as unlabeled set. We train two sets of models
on these data. The first set of models is trained on labeled data only, and the other set of models
is trained on the unlabeled set using a combination of both unsupervised loss functions in addition
to the labeled set. For this dataset, we do not perform separate experiments for two unsupervised
loss functions because of time constraints. However, based on the results from MNIST, SVHN and
NORB, we deduce that the combination of both unsupervised losses provides improved accuracy.
We use data augmentation for these experiments. Similar to [39], we perform affine transformations,
including randomized mix of translations, rotations, flipping, stretching and shearing operations
by T^j(x_i) of Eq. 1. Similar to [39], we train the network without transformations for the last 10
epochs. We use the following parameters for the models: (32kC2 − FMP∛2)^12 − C2 − C1.
We use dropout, and its ratio gradually increases from the first layer to the last layer. The results
are given in Table 3. We also compare the results to ladder networks [28]. The model in [28]
does not use data augmentation. We can see that the combination of unsupervised loss functions
on unlabeled data improves the accuracy of the models. In another set of experiments, we use all
available training data as both labeled and unlabeled sets. We train a network with the following
parameters: (96kC2 − FMP∛2)^12 − C2 − C1. We use affine transformations for this task too.
Here again, we use transformation/stability plus the mutual-exclusivity loss function. We repeat
this experiment five times and achieve a 3.18% ± 0.1 mean and standard deviation error rate. The
Table 3: Error rates on test data for CIFAR10 with 4000 labeled samples (mean % ± std).

                     transformation/stability + mutual-exclusivity    ladder networks [28]
labeled data only:   13.60 ± 0.24                                     23.33 ± 0.61
semi-supervised:     11.29 ± 0.24                                     20.40 ± 0.47
state-of-the-art error rate for this dataset is 3.47%, achieved by the fractional max-pooling method
[39] but obtained with a larger model (160n vs. 96n). We perform a single run experiment with 160n
model and achieve the error rate of 3.00%. Similar to [39], we perform 100 passes during test time.
Here, we surpass state-of-the-art accuracy by adding unsupervised loss functions.
4.4 CIFAR100
CIFAR100 is also a collection of 60000 tiny images of size 32 × 32. This dataset is similar to
CIFAR10. However, it contains images of 100 categories compared to 10. Therefore, we have a
smaller number of training samples per category. Similar to CIFAR10, we perform experiments on
this dataset using sparse convolutional networks. We use all available training data as both labeled
and unlabeled sets. The state-of-the-art error rate for this dataset is 23.82%, obtained by fractional
max-pooling [39] on sparse convolutional networks. The following model was used to achieve this
error rate: (96kC2 − FMP∛2)^12 − C2 − C1. Dropout was also used with a ratio increasing from
the first layer to the last layer. We use the same model parameters and add transformation/stability
plus the mutual-exclusivity loss function. Similar to [39], we do not use data augmentation for this
task (T^j(x_i) of Eq. 1 is the identity function). Therefore, the proposed loss function minimizes the
randomness effect due to dropout and max-pooling. We achieve a 21.43% ± 0.16 mean and standard
deviation error rate, which is the state-of-the-art for this task. We perform 12 passes during the test
time similar to [39].
4.5 ImageNet
We perform experiments on the ILSVRC 2012 challenge. The training data consists of 1281167
natural images of different sizes from 1000 categories. We create five labeled datasets from available
training samples. Each dataset consists of 10% of training data. We form each dataset by randomly
picking a subset of training samples. All available training data is used as the unlabeled set. We use
cuda-convnet to train AlexNet model [7] for this dataset. Similar to [7], all images are re-sized to
256 × 256. We also use data augmentation for this task following the steps of [7], i.e., T^j(x_i) of Eq. 1
performs random translations, flipping and color noise. We train two models on each labeled dataset.
One model is trained using labeled data only. The other model is trained on both labeled and unlabeled
set using the transformation/stability plus mutual-exclusivity loss function. At each iteration, we
generate four different transformed versions of each unlabeled sample. So, each unlabeled sample
is forward passed through the network four times. Since we use all training data as unlabeled set,
the computational cost of each iteration is roughly quadrupled. But, in practice we found that when
we use 10% of training data as labeled set, the network converges in 20 epochs instead of standard
90 epochs of AlexNet model. So, overall cost of our method for ImageNet is less than or equal to
AlexNet. The results on validation set are shown in Table 4. We also compare the results to the
model trained on the mutual-exclusivity loss function only and reported in [30]. We can see that
even for a large dataset with many categories, the proposed unsupervised loss function improves the
classification accuracy. The error rate of a single AlexNet model on validation set of ILSVRC 2012
using all training data is 18.2% [7].
Table 4: Error rates (%) on the validation set for ILSVRC 2012 (Top-5).

               rep 1   rep 2   rep 3   rep 4   rep 5   mean ± std      mutual xcl [30]   [21] ~1.5% of data
labeled only:  45.73   46.15   46.06   45.57   46.08   45.91 ± 0.25    45.63             85.9
semi-sup:      39.50   39.99   39.94   39.70   40.08   39.84 ± 0.23    42.90             84.2
5 Discussion
We can see that the proposed loss function can improve the accuracy of a ConvNet regardless of the
architecture and implementation. We improve the accuracy of two relatively different implementations
of ConvNets, i.e., cuda-convnet and sparse convolutional networks. For SVHN and NORB, we do not
use dropout or randomized pooling for the experiments performed using cuda-convnet. Therefore, the
only source of variation in different passes of a sample through the network is random transformations
(translation and rotation). For the experiments performed using sparse convolutional networks on
these two datasets, we do not use data transformation. Instead, we use dropout and randomized
pooling. Based on the results, we can see that in both cases we can significantly improve the accuracy
when we have a small number of labeled samples. For CIFAR100, we achieve state-of-the-art error
rate of 21.43% by taking advantage of the variations caused by dropout and randomized pooling. In
ImageNet and CIFAR10 experiments, we use both data transformation and dropout. For CIFAR10,
we also have randomized pooling and achieve the state-of-the-art error rate of 3.00%. In MNIST
experiments with 100 labeled samples and NORB experiments with 1% of labeled data, we achieve
accuracy reasonably close to the case when we use all available training data by applying mutualexclusivity loss and minimizing the difference in predictions of multiple passes caused by dropout
and randomized pooling.
6 Conclusion
In this paper, we proposed an unsupervised loss function that minimizes the variations in different
passes of a sample through the network caused by non-deterministic transformations and randomized dropout and max-pooling schemes. We evaluated the proposed method using two ConvNet
implementations on multiple benchmark datasets. We showed that it is possible to achieve significant improvements in accuracy by using the transformation/stability loss function along with
mutual-exclusivity of [30] when we have a small number of labeled data available.
Acknowledgments
This work was supported by NSF IIS-1149299.
References
[1] B. B. Le Cun, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Handwritten digit recognition with a back-propagation network," in Advances in Neural Information Processing Systems, Citeseer, 1990.
[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[3] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, "Overfeat: Integrated recognition, localization and detection using convolutional networks," arXiv preprint arXiv:1312.6229, 2013.
[4] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
[5] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[6] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[8] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012.
[9] D. Ciresan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," in Computer Vision and Pattern Recognition, pp. 3642–3649, IEEE, 2012.
[10] A. Blum and T. Mitchell, "Combining labeled and unlabeled data with co-training," in Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 92–100, ACM, 1998.
[11] V. R. de Sa, "Learning classification with unlabeled data," in Advances in Neural Information Processing Systems, pp. 112–119, 1994.
[12] D. J. Miller and H. S. Uyar, "A mixture of experts classifier with learning based on both labelled and unlabelled data," in Advances in Neural Information Processing Systems, pp. 571–577, 1997.
[13] T. Joachims, "Transductive inference for text classification using support vector machines," in ICML, vol. 99, pp. 200–209, 1999.
[14] K. Bennett, A. Demiriz, et al., "Semi-supervised support vector machines," Advances in Neural Information Processing Systems, pp. 368–374, 1999.
[15] A. Blum and S. Chawla, "Learning from labeled and unlabeled data using graph mincuts," 2001.
[16] X. Zhu, Z. Ghahramani, J. Lafferty, et al., "Semi-supervised learning using Gaussian fields and harmonic functions," in International Conference on Machine Learning, vol. 3, pp. 912–919, 2003.
[17] X. Zhu and Z. Ghahramani, "Learning from labeled and unlabeled data with label propagation," tech. rep., Citeseer, 2002.
[18] Y. LeCun, K. Kavukcuoglu, C. Farabet, et al., "Convolutional networks and applications in vision," in ISCAS, pp. 253–256, 2010.
[19] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, "What is the best multi-stage architecture for object recognition?," in International Conference on Computer Vision, pp. 2146–2153, IEEE, 2009.
[20] K. Kavukcuoglu, M. Ranzato, and Y. LeCun, "Fast inference in sparse coding algorithms with applications to object recognition," arXiv preprint arXiv:1010.3467, 2010.
[21] P. Agrawal, J. Carreira, and J. Malik, "Learning to see by moving," in International Conference on Computer Vision, pp. 37–45, 2015.
[22] C. Doersch, A. Gupta, and A. A. Efros, "Unsupervised visual representation learning by context prediction," in International Conference on Computer Vision, pp. 1422–1430, 2015.
[23] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
[24] R. Johnson and T. Zhang, "Semi-supervised convolutional neural networks for text categorization via region embedding," in Advances in Neural Information Processing Systems, pp. 919–927, 2015.
[25] J. Weston, F. Ratle, H. Mobahi, and R. Collobert, "Deep learning via semi-supervised embedding," in Neural Networks: Tricks of the Trade, pp. 639–655, Springer, 2012.
[26] X. Wang and A. Gupta, "Unsupervised learning of visual representations using videos," in International Conference on Computer Vision, pp. 2794–2802, 2015.
[27] D. Jayaraman and K. Grauman, "Learning image representations tied to ego-motion," in International Conference on Computer Vision, pp. 1413–1421, 2015.
[28] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko, "Semi-supervised learning with ladder networks," in Advances in Neural Information Processing Systems, pp. 3532–3540, 2015.
[29] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox, "Discriminative unsupervised feature learning with convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 766–774, 2014.
[30] M. Sajjadi, M. Javanmardi, and T. Tasdizen, "Mutual exclusivity loss for semi-supervised deep learning," in International Conference on Image Processing, IEEE, 2016.
[31] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri, "Transformation invariance in pattern recognition – tangent distance and tangent propagation," in Neural Networks: Tricks of the Trade, pp. 239–274, Springer, 1998.
[32] D. Jayaraman and K. Grauman, "Slow and steady feature analysis: higher order temporal coherence in video," Computer Vision and Pattern Recognition, 2016.
[33] L. Sun, K. Jia, T.-H. Chan, Y. Fang, G. Wang, and S. Yan, "DL-SFA: deeply-learned slow feature analysis for action recognition," in Computer Vision and Pattern Recognition, pp. 2625–2632, 2014.
[34] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," 2009.
[35] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, vol. 2011, p. 4, Granada, Spain, 2011.
[36] Y. LeCun, F. J. Huang, and L. Bottou, "Learning methods for generic object recognition with invariance to pose and lighting," in Computer Vision and Pattern Recognition, vol. 2, pp. II-97, IEEE, 2004.
[37] A. Krizhevsky, "cuda-convnet." code.google.com/p/cuda-convnet, 2014.
[38] B. Graham, "Spatially-sparse convolutional neural networks," arXiv preprint arXiv:1409.6070, 2014.
[39] B. Graham, "Fractional max-pooling," arXiv preprint arXiv:1412.6071, 2014.
[40] J.-R. Chang and Y.-S. Chen, "Batch-normalized maxout network in network," arXiv preprint arXiv:1511.02583, 2015.
5,896 | 6,334 | Flexible Models for Microclustering with
Application to Entity Resolution
Giacomo Zanella*
Department of Decision Sciences
Bocconi University
Brenda Betancourt*
Department of Statistical Science
Duke University
[email protected]
[email protected]
Hanna Wallach
Microsoft Research
[email protected]
Jeffrey Miller
Department of Biostatistics
Harvard University
Abbas Zaidi
Department of Statistical Science
Duke University
[email protected]
[email protected]
Rebecca C. Steorts
Departments of Statistical Science and Computer Science
Duke University
[email protected]
Abstract
Most generative models for clustering implicitly assume that the number of data
points in each cluster grows linearly with the total number of data points. Finite
mixture models, Dirichlet process mixture models, and Pitman–Yor process mixture
models make this assumption, as do all other infinitely exchangeable clustering
models. However, for some applications, this assumption is inappropriate. For
example, when performing entity resolution, the size of each cluster should be
unrelated to the size of the data set, and each cluster should contain a negligible
fraction of the total number of data points. These applications require models that
yield clusters whose sizes grow sublinearly with the size of the data set. We address
this requirement by defining the microclustering property and introducing a new
class of models that can exhibit this property. We compare models within this class
to two commonly used clustering models using four entity-resolution data sets.
1 Introduction
Many clustering applications require models that assume cluster sizes grow linearly with the size of the
data set. These applications include topic modeling, inferring population structure, and discriminating
among cancer subtypes. Infinitely exchangeable clustering models, including finite mixture models,
Dirichlet process mixture models, and Pitman–Yor process mixture models, all make this linear-growth assumption, and have seen numerous successes when used in these contexts. For other clustering applications, such as entity resolution, this assumption is inappropriate. Entity resolution (including record linkage and de-duplication) involves identifying duplicate² records in noisy databases [1, 2], traditionally by directly linking records to one another. Unfortunately, this traditional approach is computationally infeasible for large data sets, a serious limitation in "the age of big data" [1, 3]. As a
* Giacomo Zanella and Brenda Betancourt are joint first authors.
² In the entity resolution literature, the term "duplicate records" does not mean that the records are identical, but rather that the records are corrupted, degraded, or otherwise noisy representations of the same entity.
result, researchers increasingly treat entity resolution as a clustering problem, where each entity is implicitly associated with one or more records and the inference goal is to recover the latent entities (clusters) that correspond to the observed records (data points) [4, 5, 6]. In contrast to other clustering applications, the number of data points in each cluster should remain small, even for large data sets. Applications like this require models that yield clusters whose sizes grow sublinearly with the total number
of data points [7]. To address this requirement, we define the microclustering property in section 2 and,
in section 3, introduce a new class of models that can exhibit this property. In section 4, we compare
two models within this class to two commonly used infinitely exchangeable clustering models.
2 The Microclustering Property
To cluster $N$ data points $x_1, \ldots, x_N$ using a partition-based Bayesian clustering model, one first places a prior over partitions of $[N] = \{1, \ldots, N\}$. Then, given a partition $C_N$ of $[N]$, one models the data points in each part $c \in C_N$ as jointly distributed according to some chosen distribution. Finally, one computes the posterior distribution over partitions and, e.g., uses it to identify probable partitions of $[N]$. Mixture models are a well-known type of partition-based Bayesian clustering model, in which $C_N$ is implicitly represented by a set of cluster assignments $z_1, \ldots, z_N$. These cluster assignments can be regarded as the first $N$ elements of an infinite sequence $z_1, z_2, \ldots$, drawn a priori from
$$\pi \sim H \quad\text{and}\quad z_1, z_2, \ldots \mid \pi \overset{\text{iid}}{\sim} \pi, \qquad (1)$$
where $H$ is a prior over $\pi$ and $\pi$ is a vector of mixture weights with $\sum_l \pi_l = 1$ and $\pi_l \geq 0$ for all $l$. Commonly used mixture models include (a) finite mixtures where the dimensionality of $\pi$ is fixed and $H$ is usually a Dirichlet distribution; (b) finite mixtures where the dimensionality of $\pi$ is a random variable [8, 9]; (c) Dirichlet process (DP) mixtures where the dimensionality of $\pi$ is infinite [10]; and (d) Pitman–Yor process (PYP) mixtures, which generalize DP mixtures [11].
Equation 1 implicitly defines a prior over partitions of $\mathbb{N} = \{1, 2, \ldots\}$. Any random partition of $\mathbb{N}$ induces a sequence of random partitions $(C_N : N = 1, 2, \ldots)$, where $C_N$ is a partition of $[N]$. Via the strong law of large numbers, the cluster sizes in any such sequence obtained via equation 1 grow linearly with $N$ because, with probability one, for all $l$, $\frac{1}{N} \sum_{n=1}^{N} I(z_n = l) \to \pi_l$ as $N \to \infty$, where $I(\cdot)$ denotes the indicator function. Unfortunately, this linear growth assumption is not appropriate
for entity resolution and other applications that require clusters whose sizes grow sublinearly with $N$. To address this requirement, we therefore define the microclustering property: a sequence of random partitions $(C_N : N = 1, 2, \ldots)$ exhibits the microclustering property if $M_N$ is $o_p(N)$, where $M_N$ is the size of the largest cluster in $C_N$, or, equivalently, if $M_N / N \to 0$ in probability as $N \to \infty$.
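This definition is easy to probe numerically. The minimal sketch below (with illustrative mixture weights) draws cluster assignments from equation 1 and tracks $M_N / N$, which approaches $\max_l \pi_l > 0$ rather than zero, so no fixed mixture model can microcluster:

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.3, 0.2])             # illustrative fixed mixture weights

for N in [100, 1_000, 10_000, 100_000]:
    z = rng.choice(len(pi), size=N, p=pi)  # cluster assignments from eq. 1
    M_N = np.bincount(z).max()             # size of the largest cluster
    print(N, M_N / N)                      # tends to max(pi) = 0.5, not to 0
```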
A clustering model exhibits the microclustering property if the sequence of random partitions implied by that model satisfies the above definition. No mixture model can exhibit the microclustering property (unless its parameters are allowed to vary with $N$). In fact, Kingman's paintbox theorem [12, 13] implies that any exchangeable partition of $\mathbb{N}$, such as a partition obtained using equation 1, is either equal to the trivial partition in which each part contains one element or satisfies $\liminf_{N \to \infty} M_N / N > 0$ with positive probability. By Kolmogorov's extension theorem, a sequence of random partitions $(C_N : N = 1, 2, \ldots)$ corresponds to an exchangeable random partition of $\mathbb{N}$ whenever (a) each $C_N$ is finitely exchangeable (i.e., its probability is invariant under permutations of $\{1, \ldots, N\}$) and (b) the sequence is projective (also known as consistent in distribution), i.e., if $N' < N$, the distribution over $C_{N'}$ coincides with the marginal distribution over partitions of $[N']$ induced by the distribution over $C_N$. Therefore, to obtain a nontrivial model that exhibits the microclustering property, we must sacrifice either (a) or (b). Previous work [14] sacrificed (a); in this paper, we instead sacrifice (b).
Sacrificing finite exchangeability and sacrificing projectivity have very different consequences. If a
partition-based Bayesian clustering model is not finitely exchangeable, then inference will depend on
the order of the data points. For most applications, this consequence is undesirable: there is no reason
to believe that the order of the data points is meaningful. In contrast, if a model lacks projectivity,
then the implied joint distribution over a subset of the data points in a data set will not be the same as
the joint distribution obtained by modeling the subset directly. In the context of entity resolution, sacrificing projectivity is a more natural and less restrictive choice than sacrificing finite exchangeability.
3 Kolchin Partition Models for Microclustering
We introduce a new class of Bayesian models for microclustering by placing a prior on the number of clusters $K$ and, given $K$, modeling the cluster sizes $N_1, \ldots, N_K$ directly. We start by defining
$$K \sim \kappa \quad\text{and}\quad N_1, \ldots, N_K \mid K \overset{\text{iid}}{\sim} \mu, \qquad (2)$$
where $\kappa = (\kappa_1, \kappa_2, \ldots)$ and $\mu = (\mu_1, \mu_2, \ldots)$ are probability distributions over $\mathbb{N} = \{1, 2, \ldots\}$. We then define $N = \sum_{k=1}^{K} N_k$ and, given $N_1, \ldots, N_K$, generate a set of cluster assignments $z_1, \ldots, z_N$ by drawing a vector uniformly at random from the set of permutations of $(\underbrace{1, \ldots, 1}_{N_1 \text{ times}}, \underbrace{2, \ldots, 2}_{N_2 \text{ times}}, \ldots, \underbrace{K, \ldots, K}_{N_K \text{ times}})$. The cluster assignments $z_1, \ldots, z_N$ induce a random partition $C_N$ of $[N]$, where $N$ is itself a random variable, i.e., $C_N$ is a random partition of a random number of elements. We refer to the resulting class of marginal distributions over $C_N$ as Kolchin partition (KP) models [15, 16] because the form of equation 2 is closely related to Kolchin's representation theorem for Gibbs-type partitions (see, e.g., [16], theorem 1.2). For appropriate choices of $\kappa$ and $\mu$, KP models can exhibit the microclustering property (see appendix B for an example).
If $\mathcal{C}_N$ denotes the set of all possible partitions of $[N]$, then $\bigcup_{N=1}^{\infty} \mathcal{C}_N$ is the set of all possible partitions of $[N]$ for all $N \in \mathbb{N}$. The probability of any given partition $C_N \in \bigcup_{N=1}^{\infty} \mathcal{C}_N$ is
$$P(C_N) = \frac{|C_N|!\, \kappa_{|C_N|}}{N!} \left( \prod_{c \in C_N} |c|!\, \mu_{|c|} \right), \qquad (3)$$
where $|\cdot|$ denotes the cardinality of a set, $|C_N|$ is the number of clusters in $C_N$, and $|c|$ is the number of elements in cluster $c$. In practice, however, $N$ is usually observed. Conditioned on $N$, a KP model implies that $P(C_N \mid N) \propto |C_N|!\, \kappa_{|C_N|} \prod_{c \in C_N} |c|!\, \mu_{|c|}$. Equation 3 leads to a "reseating algorithm", much like the Chinese restaurant process (CRP), derived by sampling from $P(C_N \mid N, C_N \backslash n)$, where $C_N \backslash n$ is the partition obtained by removing element $n$ from $C_N$:
- for $n = 1, \ldots, N$, reassign element $n$ to
  - an existing cluster $c \in C_N \backslash n$ with probability $\propto (|c| + 1)\, \frac{\mu_{|c|+1}}{\mu_{|c|}}$
  - or a new cluster with probability $\propto (|C_N \backslash n| + 1)\, \frac{\kappa_{|C_N \backslash n|+1}}{\kappa_{|C_N \backslash n|}}\, \mu_1$.
We can use this reseating algorithm to draw samples from $P(C_N \mid N)$; however, unlike the CRP, it does not produce an exact sample if it is used to incrementally construct a partition from the empty set. In practice, this limitation does not lead to any negative consequences because standard posterior inference sampling methods do not rely on this property. When a KP model is used as the prior in a partition-based clustering model, e.g., as an alternative to equation 1, the resulting Gibbs sampling algorithm for $C_N$ is similar to this reseating algorithm, but accompanied by likelihood terms. Unfortunately, this algorithm is slow for large data sets. In appendix C, we therefore propose a faster Gibbs sampling algorithm, the chaperones algorithm, that is particularly well suited to microclustering.
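For concreteness, here is a minimal numpy sketch of one sweep of the basic reseating algorithm (this is not the chaperones algorithm; `kappa` and `mu` are assumed to be callables returning the prior probabilities $\kappa_k$ and $\mu_m$, strictly positive on $\{1, 2, \ldots\}$, and $N \geq 2$ is assumed):

```python
import numpy as np

def reseat_sweep(z, kappa, mu, rng):
    """One sweep of the KP reseating algorithm over cluster labels z[0..N-1].
    kappa(k) and mu(m) return the prior probabilities of k clusters and of a
    cluster of size m, respectively."""
    N = len(z)
    for n in range(N):
        z[n] = -1                                    # remove element n
        labels, sizes = np.unique(z[z >= 0], return_counts=True)
        K = len(labels)                              # |C_N \ n|
        w = [(s + 1) * mu(s + 1) / mu(s) for s in sizes]     # join cluster c
        w.append((K + 1) * kappa(K + 1) / kappa(K) * mu(1))  # open new cluster
        w = np.asarray(w, dtype=float)
        j = rng.choice(len(w), p=w / w.sum())
        z[n] = labels[j] if j < K else labels.max() + 1      # fresh label if new
    return z

# Example with simple geometric-style weights, for shape only.
rng = np.random.default_rng(0)
z = np.array([0, 0, 1, 1, 1, 2])
z = reseat_sweep(z, kappa=lambda k: 0.5 ** k, mu=lambda m: 0.5 ** m, rng=rng)
```

For the NBNB model below, `kappa` and `mu` would be the pmfs of the truncated negative binomial distributions; any strictly positive pmfs over the positive integers can be plugged in.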
In sections 3.1 and 3.2, we introduce two related KP models for microclustering, and in section 3.4
we explain how KP models can be applied in the context of entity resolution with categorical data.
3.1 The NBNB Model
We start with equation 3 and define
$$\kappa = \text{NegBin}(a, q) \quad\text{and}\quad \mu = \text{NegBin}(r, p), \qquad (4)$$
where $\text{NegBin}(a, q)$ and $\text{NegBin}(r, p)$ are negative binomial distributions truncated to $\mathbb{N} = \{1, 2, \ldots\}$. We assume that $a > 0$ and $q \in (0, 1)$ are fixed hyperparameters, while $r$ and $p$ are distributed as $r \sim \text{Gam}(\eta_r, s_r)$ and $p \sim \text{Beta}(u_p, v_p)$ for fixed $\eta_r$, $s_r$, $u_p$ and $v_p$.³ We refer to the resulting marginal distribution over $C_N$ as the negative binomial–negative binomial (NBNB) model.
³ We use the shape-and-rate parameterization of the gamma distribution.
Figure 1: The NBNB (left) and NBD (right) models appear to exhibit the microclustering property; each panel plots $\log(M_N / N)$ against $\log(N)$.
By substituting equation 4 into equation 3, we obtain the probability of $C_N$ conditioned on $N$:
$$P(C_N \mid N, a, q, r, p) \propto \Gamma(|C_N| + a)\, \beta^{|C_N|} \prod_{c \in C_N} \frac{\Gamma(|c| + r)}{\Gamma(r)}, \qquad (5)$$
where $\beta = \frac{q\,(1-p)^r}{1 - (1-p)^r}$. We provide the complete derivation of equation 5, along with the conditional posterior distributions over $r$ and $p$, in appendix A.2. Posterior inference for the NBNB model involves alternating between (a) sampling $C_N$ from $P(C_N \mid N, a, q, r, p)$ using the chaperones algorithm and (b) sampling $r$ and $p$ from their respective conditional posteriors using, e.g., slice sampling [17].
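The unnormalized log posterior of equation 5 is cheap to evaluate; a sketch (cluster sizes are passed in directly, and scipy's `gammaln` is used for numerical stability):

```python
import numpy as np
from scipy.special import gammaln

def nbnb_log_post(sizes, a, q, r, p):
    """Unnormalized log P(C_N | N, a, q, r, p) from equation 5;
    sizes lists |c| for each cluster c in C_N."""
    K = len(sizes)
    log_beta = np.log(q) + r * np.log1p(-p) - np.log1p(-(1.0 - p) ** r)
    return (gammaln(K + a) + K * log_beta
            + sum(gammaln(s + r) - gammaln(r) for s in sizes))

print(nbnb_log_post([2, 1, 3], a=1.0, q=0.5, r=1.0, p=0.5))
```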
3.2 The NBD Model
Although $\kappa = \text{NegBin}(a, q)$ will yield plausible values of $K$, $\mu = \text{NegBin}(r, p)$ may not be sufficiently flexible to capture realistic properties of $N_1, \ldots, N_K$, especially when $K$ is large. For example, in a record-linkage application involving two otherwise noise-free databases containing thousands of records, $K$ will be large and each $N_k$ will be at most two. A negative binomial distribution cannot capture this property. We therefore define a second KP model, the negative binomial–Dirichlet (NBD) model, by taking a nonparametric approach to modeling $N_1, \ldots, N_K$ and drawing $\mu$ from an infinite-dimensional Dirichlet distribution over the positive integers:
$$\kappa = \text{NegBin}(a, q) \quad\text{and}\quad \mu \mid \alpha, \mu^{(0)} \sim \text{Dir}\big(\alpha, \mu^{(0)}\big), \qquad (6)$$
where $\alpha > 0$ is a fixed concentration parameter and $\mu^{(0)} = (\mu^{(0)}_1, \mu^{(0)}_2, \ldots)$ is a fixed base measure with $\sum_{m=1}^{\infty} \mu^{(0)}_m = 1$ and $\mu^{(0)}_m \geq 0$ for all $m$. The probability of $C_N$ conditioned on $N$ and $\mu$ is
$$P(C_N \mid N, a, q, \mu) \propto \Gamma(|C_N| + a)\, q^{|C_N|} \prod_{c \in C_N} |c|!\, \mu_{|c|}. \qquad (7)$$
Posterior inference for the NBD model involves alternating between (a) sampling $C_N$ from $P(C_N \mid N, a, q, \mu)$ using the chaperones algorithm and (b) sampling $\mu$ from its conditional posterior:
$$\mu \mid C_N, \alpha, \mu^{(0)} \sim \text{Dir}\big(\alpha \mu^{(0)}_1 + L_1,\; \alpha \mu^{(0)}_2 + L_2,\; \ldots\big), \qquad (8)$$
where $L_m$ is the number of clusters of size $m$ in $C_N$. Although $\mu$ is an infinite-dimensional vector, only the first $N$ elements affect $P(C_N \mid a, q, \mu)$. Therefore, it is sufficient to sample the $(N + 1)$-dimensional vector $(\mu_1, \ldots, \mu_N, 1 - \sum_{m=1}^{N} \mu_m)$ from equation 8, modified accordingly, and retain only $\mu_1, \ldots, \mu_N$. We provide complete derivations of equations 7 and 8 in appendix A.3.
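Step (b) then reduces to a single finite Dirichlet draw; a sketch (the geometric base measure below is illustrative, and the tail mass is clipped to avoid a zero concentration):

```python
import numpy as np

def sample_mu(cluster_sizes, N, alpha, mu0, rng):
    """One draw of (mu_1, ..., mu_N) from equation 8.
    mu0: the first N (or more) entries of the base measure mu^(0)."""
    L = np.bincount(cluster_sizes, minlength=N + 1)[1:N + 1]  # L_1, ..., L_N
    tail = max(1.0 - mu0[:N].sum(), 1e-12)     # base-measure mass beyond N
    conc = np.append(alpha * mu0[:N] + L, alpha * tail)
    return rng.dirichlet(conc)[:-1]            # keep mu_1, ..., mu_N

rng = np.random.default_rng(0)
mu0 = 0.5 ** np.arange(1, 101)                 # geometric base, parameter 0.5
mu = sample_mu(np.array([2, 2, 1, 3]), N=8, alpha=1.0, mu0=mu0, rng=rng)
```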
3.3 The Microclustering Property for the NBNB and NBD Models
Figure 1 contains empirical evidence suggesting that the NBNB and NBD models both exhibit the microclustering property. For each model, we generated samples of $M_N / N$ for $N = 100, \ldots, 10^4$. For the NBNB model, we set $a = 1$, $q = 0.5$, $r = 1$, and $p = 0.5$ and generated the samples using rejection sampling. For the NBD model, we set $a = 1$, $q = 0.5$, and $\alpha = 1$ and set $\mu^{(0)}$ to be a geometric distribution over $\mathbb{N} = \{1, 2, \ldots\}$ with a parameter of 0.5. We generated the samples using MCMC methods. For both models, $M_N / N$ appears to converge to zero in probability as $N \to \infty$, as desired. In appendix B, we also prove that a variant of the NBNB model exhibits the microclustering property.
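The rejection sampler for the NBNB prior is simple to reproduce; a sketch (mapping the paper's $(a, q)$ and $(r, p)$ parameterizations onto numpy's negative binomial is an assumption of this sketch, and truncation to $\{1, 2, \ldots\}$ is done by resampling zeros):

```python
import numpy as np

def trunc_negbin(rng, n, p, size):
    """Negative binomial truncated to {1, 2, ...} by resampling zeros."""
    x = rng.negative_binomial(n, p, size)
    while (x == 0).any():
        x[x == 0] = rng.negative_binomial(n, p, int((x == 0).sum()))
    return x

def sample_ratio(N, a, q, r, p, rng):
    """One draw of M_N / N under the NBNB prior, rejecting until sum(N_k) == N.
    Small N keeps the rejection rate manageable for this illustration."""
    while True:
        K = int(trunc_negbin(rng, a, 1.0 - q, 1)[0])   # K ~ NegBin(a, q)
        sizes = trunc_negbin(rng, r, 1.0 - p, K)       # N_k ~ NegBin(r, p)
        if sizes.sum() == N:
            return sizes.max() / N

rng = np.random.default_rng(0)
print(np.mean([sample_ratio(5, 1.0, 0.5, 1.0, 0.5, rng) for _ in range(20)]))
```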
3.4 Application to Entity Resolution
KP models can be used to perform entity resolution. In this context, the data points $x_1, \ldots, x_N$ are observed records and the $K$ clusters are latent entities. If each record consists of $F$ categorical fields, then
$$C_N \sim \text{KP model} \qquad (9)$$
$$\theta_{fk} \mid \delta_f, \beta_f \sim \text{Dir}\big(\delta_f, \beta_f\big) \qquad (10)$$
$$z_n = \zeta(C_N, n) \qquad (11)$$
$$x_{fn} \mid z_n, \theta_{f1}, \ldots, \theta_{fK} \sim \text{Cat}\big(\theta_{f z_n}\big) \qquad (12)$$
for $f = 1, \ldots, F$, $k = 1, \ldots, K$, and $n = 1, \ldots, N$, where $\zeta(C_N, n)$ maps the $n$th record to a latent cluster assignment $z_n$ according to $C_N$. We assume that $\delta_f > 0$ is distributed as $\delta_f \sim \text{Gam}(1, 1)$, while $\beta_f$ is fixed. Via Dirichlet–multinomial conjugacy, we can marginalize over $\theta_{11}, \ldots, \theta_{FK}$ to obtain a closed-form expression for $P(x_1, \ldots, x_N \mid z_1, \ldots, z_N, \delta_f, \beta_f)$. Posterior inference involves alternating between (a) sampling $C_N$ from $P(C_N \mid x_1, \ldots, x_N, \delta_f)$ using the chaperones algorithm accompanied by appropriate likelihood terms, (b) sampling the parameters of the KP model from their conditional posteriors, and (c) sampling $\delta_f$ from its conditional posterior using slice sampling.
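The generative side of this model (equations 10 and 12) is easy to simulate; a sketch (field cardinalities and parameter values below are made up for illustration, and the Dirichlet draws are clipped and renormalized for numerical safety):

```python
import numpy as np

def generate_records(z, beta, delta, rng):
    """Simulate records from equations 10 and 12.
    z: entity assignment of each record; beta[f]: base category
    distribution of field f; delta[f]: concentration delta_f."""
    K = int(z.max()) + 1
    fields = []
    for b, d in zip(beta, delta):
        theta = rng.dirichlet(d * np.asarray(b), size=K)   # eq. 10, row per entity
        theta = np.clip(theta, 1e-12, None)
        theta /= theta.sum(axis=1, keepdims=True)
        fields.append([rng.choice(len(b), p=theta[k]) for k in z])  # eq. 12
    return np.array(fields).T                              # N x F record matrix

rng = np.random.default_rng(0)
z = np.array([0, 0, 1, 2, 2, 2])                  # six records, three entities
beta = [np.full(4, 0.25), np.full(3, 1.0 / 3.0)]  # two categorical fields
# Small delta concentrates each entity's category distribution, i.e. little
# distortion among records of the same entity.
x = generate_records(z, beta, delta=[0.5, 0.5], rng=rng)
print(x)
```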
4 Experiments
In this section, we compare two entity resolution models based on the NBNB model and the NBD model to two similar models based on the DP mixture model [10] and the PYP mixture model [11]. All four models use the likelihood in equations 10 and 12. For the NBNB model and the NBD model, we set $a$ and $q$ to reflect a weakly informative prior belief that $E[K] = \sqrt{\text{Var}[K]} = \frac{N}{2}$. For the NBNB model, we set $\eta_r = s_r = 1$ and $u_p = v_p = 2$.⁴ For the NBD model, we set $\alpha = 1$ and set $\mu^{(0)}$ to be a geometric distribution over $\mathbb{N} = \{1, 2, \ldots\}$ with a parameter of 0.5. This base measure reflects a prior belief that $E[N_k] = 2$. Finally, to ensure a fair comparison between the two different classes of model, we set the DP and PYP concentration parameters to reflect a prior belief that $E[K] = \frac{N}{2}$.
We assess how well each model "fits" four data sets typical of those arising in real-world entity resolution applications. For each data set, we consider four statistics: (a) the number of singleton clusters, (b) the maximum cluster size, (c) the mean cluster size, and (d) the 90th percentile of cluster sizes. We compare each statistic's true value to its posterior distribution according to each of the models. For each model and data set combination, we also consider five entity-resolution summary statistics: (a) the posterior expected number of clusters, (b) the posterior standard error, (c) the false negative rate, (d) the false discovery rate, and (e) the posterior expected value of $\delta_f = \delta$ for $f = 1, \ldots, F$. The false negative and false discovery rates are both invariant under permutations of $1, \ldots, K$ [5, 18].
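Both rates can be computed from pairwise co-clustering decisions, which is what makes them invariant to relabeling; a sketch using one standard pairwise definition (which may differ in detail from the exact definitions in [5, 18]):

```python
from itertools import combinations

def pairwise_fnr_fdr(true_z, est_z):
    """False negative / false discovery rates over record pairs, where a
    'link' is a pair of records placed in the same cluster."""
    idx = range(len(true_z))
    true_links = {(i, j) for i, j in combinations(idx, 2) if true_z[i] == true_z[j]}
    est_links = {(i, j) for i, j in combinations(idx, 2) if est_z[i] == est_z[j]}
    fnr = len(true_links - est_links) / max(len(true_links), 1)  # missed links
    fdr = len(est_links - true_links) / max(len(est_links), 1)   # spurious links
    return fnr, fdr

print(pairwise_fnr_fdr([0, 0, 1, 1, 2], [0, 0, 1, 2, 2]))  # (0.5, 0.5)
```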
4.1 Data Sets
We constructed four realistic data sets, each consisting of N records associated with K entities.
Italy: We derived this data set from the Survey on Household Income and Wealth, conducted by the
Bank of Italy every two years. There are nine categorical fields, including year of birth, employment
status, and highest level of education attained. Ground truth is available via unique identifiers based
upon social security numbers; roughly 74% of the clusters are singletons. We used the 2008 and 2010
databases from the Friuli region to create a record-linkage data set consisting of N = 789 records; each $N_k$ is at most two. We discarded the records themselves, but preserved the number of fields, the
empirical distribution of categories for each field, the number of clusters, and the cluster sizes. We
then generated synthetic records using equations 10 and 12. We created three variants of this data set,
corresponding to $\delta = 0.02, 0.05, 0.1$. For all three, we used the empirical distribution of categories for field $f$ as $\beta_f$. By generating synthetic records in this fashion, we preserve the pertinent characteristics of the original data, while making it easy to isolate the impacts of the different priors over partitions.
⁴ We used $p \sim \text{Beta}(2, 2)$ because a uniform prior implies an unrealistic prior belief that $E[N_k] = \infty$.
⁵ http://www.nltcs.aas.duke.edu/
NLTCS5000: We derived this data set from the National Long Term Care Survey (NLTCS),⁵ a longitudinal survey of older Americans, conducted roughly every six years. We used four of the
available fields: date of birth, sex, state of residence, and regional office. We split date of birth into
three separate fields: day, month, and year. Ground truth is available via social security numbers;
roughly 68% of the clusters are singletons. We used the 1982, 1989, and 1994 databases and
down-sampled the records, preserving the proportion of clusters of each size and the maximum
cluster size, to create a record-linkage data set of N = 5,000 records; each $N_k$ is at most three. We
then generated synthetic records using the same approach that we used to create the Italy data set.
Syria2000 and SyriaSizes: We constructed these data sets from data collected by four human-rights
groups between 2011 and 2014 on people killed in the Syrian conflict [19, 20]. Hand-matched
ground truth is available from the Human Rights Data Analysis Group. Because the records were
hand matched, the data are noisy and potentially biased. Performing entity resolution is non-trivial
because there are only three categorical fields: gender, governorate, and date of death. We split date
of death, which is present for most records, into three separate fields: day, month, and year. However,
because the records only span four years, the year field conveys little information. In addition, most
records are male, and there are only fourteen governorates. We created the Syria2000 data set by
down-sampling the records, preserving the proportion of clusters of each size, to create a data set
of N = 2,000 records; the maximum cluster size is five. We created the SyriaSizes data set by down-sampling the records, preserving some of the larger clusters (which necessarily contain within-database duplications), to create a data set of N = 6,700 records; the maximum cluster size is ten.
synthetic records for both data sets using the same approach that we used to create the Italy data set.
4.2 Results
We report the results of our experiments in Table 1 and Figure 2. The NBNB and NBD models outperformed the DP and PYP models for almost all variants of the Italy and NLTCS5000 data sets. In general, the NBD model performed the best of the four, and the differences between the models' performance grew as the value of $\delta$ increased. For the Syria2000 and SyriaSizes data sets, we see no consistent pattern to the models' abilities to recover the true values of the data-set statistics. Moreover, all four models had poor false negative rates and false discovery rates, most likely because these data sets are extremely noisy and contain very few fields. We suspect that no entity resolution model would perform well for these data sets. For three of the four data sets, the exception being the Syria2000 data set, the DP model and the PYP model both greatly overestimated the number of clusters for larger values of $\delta$. Taken together, these results suggest that the flexibility of the NBNB and NBD models makes them more appropriate choices for most entity resolution applications.
5 Summary
Infinitely exchangeable clustering models assume that cluster sizes grow linearly with the size of the
data set. Although this assumption is reasonable for some applications, it is inappropriate for others.
For example, when entity resolution is treated as a clustering problem, the number of data points in
each cluster should remain small, even for large data sets. Applications like this require models that
yield clusters whose sizes grow sublinearly with the size of the data set. We introduced the microclustering property as one way to characterize models that address this requirement. We then introduced a highly flexible class of models, KP models, that can exhibit this property. We presented two models within this class, the NBNB model and the NBD model, and showed that they are better suited
to entity resolution applications than two infinitely exchangeable clustering models. We therefore
recommend KP models for applications where the size of each cluster should be unrelated to the size
of the data set, and each cluster should contain a negligible fraction of the total number of data points.
Acknowledgments
We thank Tamara Broderick, David Dunson, Merlise Clyde, and Abel Rodriguez for conversations
that helped form the ideas in this paper. In particular, Tamara Broderick played a key role in developing the idea of microclustering. We also thank the Human Rights Data Analysis Group for providing
us with data. This work was supported in part by NSF grants SBE-0965436, DMS-1045153, and
IIS-1320219; NIH grant 5R01ES017436-05; the John Templeton Foundation; the Foerster-Bernstein
Postdoctoral Fellowship; the UMass Amherst CIIR; and an EPSRC Doctoral Prize Fellowship.
Figure 2: Box plots depicting the true value (dashed line) of each data-set statistic (singleton clusters, maximum cluster size, mean cluster size, and 90th percentile of cluster sizes) for each variant ($\delta = 0.02, 0.05, 0.1$) of each data set, as well as its posterior distribution according to each of the four entity resolution models (DP, PYP, NBNB, NBD). (a) Italy: NBD model > NBNB model > PYP mixture model > DP mixture model. (b) NLTCS5000: NBD model > NBNB model > PYP mixture model > DP mixture model. (c) Syria2000: the models perform similarly because there are so few fields. (d) SyriaSizes: the models perform similarly because there are so few fields.
Table 1: Entity-resolution summary statistics (the posterior expected number of clusters, the posterior standard error, the false negative rate (lower is better), the false discovery rate (lower is better), and the posterior expected value of $\delta$) for each variant of each data set and each of the four models.

Data Set (True K)   Variant     Model   E[K]      Std. Err.   FNR    FDR    E[δ]
Italy (587)         δ = 0.02    DP      594.00    4.51        0.07   0.03   0.02
                                PYP     593.90    4.52        0.07   0.03   0.02
                                NBNB    591.00    4.43        0.04   0.03   0.02
                                NBD     590.50    3.64        0.03   0.00   0.02
                    δ = 0.05    DP      601.60    5.89        0.13   0.03   0.03
                                PYP     601.50    5.90        0.13   0.03   0.04
                                NBNB    596.40    5.79        0.11   0.04   0.04
                                NBD     592.60    5.20        0.09   0.04   0.04
                    δ = 0.1     DP      617.40    7.23        0.27   0.06   0.07
                                PYP     617.40    7.22        0.27   0.05   0.07
                                NBNB    610.90    7.81        0.24   0.06   0.08
                                NBD     596.60    9.37        0.18   0.05   0.10
NLTCS5000 (3,061)   δ = 0.02    DP      3021.70   24.96       0.02   0.11   0.03
                                PYP     3018.70   25.69       0.03   0.11   0.03
                                NBNB    3037.80   25.18       0.02   0.07   0.02
                                NBD     3028.20   5.65        0.01   0.09   0.03
                    δ = 0.05    DP      3024.00   26.15       0.05   0.13   0.06
                                PYP     3045.80   23.66       0.05   0.10   0.05
                                NBNB    3040.90   24.86       0.04   0.06   0.05
                                NBD     3039.30   10.17       0.03   0.07   0.06
                    δ = 0.1     DP      3130.50   21.44       0.12   0.09   0.10
                                PYP     3115.10   25.73       0.13   0.10   0.10
                                NBNB    3067.30   25.31       0.11   0.08   0.11
                                NBD     3049.10   16.48       0.09   0.08   0.12
Syria2000 (1,725)   δ = 0.02    DP      1695.20   25.40       0.70   0.27   0.07
                                PYP     1719.70   36.10       0.71   0.26   0.04
                                NBNB    1726.80   27.96       0.70   0.28   0.05
                                NBD     1715.20   51.56       0.67   0.28   0.02
                    δ = 0.05    DP      1701.80   31.15       0.77   0.31   0.07
                                PYP     1742.90   24.33       0.75   0.32   0.04
                                NBNB    1738.30   25.48       0.74   0.31   0.04
                                NBD     1711.40   47.10       0.69   0.32   0.03
                    δ = 0.1     DP      1678.10   40.56       0.81   0.19   0.18
                                PYP     1761.20   39.38       0.81   0.22   0.08
                                NBNB    1779.40   29.84       0.77   0.26   0.04
                                NBD     1757.30   73.60       0.74   0.25   0.03
SyriaSizes (4,075)  δ = 0.02    DP      4175.70   66.04       0.65   0.17   0.01
                                PYP     4234.30   68.55       0.64   0.19   0.01
                                NBNB    4108.70   70.56       0.65   0.19   0.01
                                NBD     3979.50   70.85       0.68   0.20   0.03
                    δ = 0.05    DP      4260.00   77.18       0.71   0.21   0.02
                                PYP     4139.10   104.22      0.75   0.18   0.04
                                NBNB    4047.10   55.18       0.73   0.20   0.04
                                NBD     3863.90   68.05       0.75   0.22   0.07
                    δ = 0.1     DP      4507.40   82.27       0.80   0.19   0.03
                                PYP     4540.30   100.53      0.80   0.20   0.03
                                NBNB    4400.60   111.91      0.80   0.23   0.03
                                NBD     4251.90   203.23      0.82   0.25   0.04
References
[1] P. Christen. Data Matching: Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection. Springer, 2012.
[2] P. Christen. A survey of indexing techniques for scalable record linkage and deduplication. IEEE Transactions on Knowledge and Data Engineering, 24(9), 2012.
[3] W. E. Winkler. Overview of record linkage and current research directions. Technical report, U.S. Bureau of the Census Statistical Research Division, 2006.
[4] R. C. Steorts, R. Hall, and S. E. Fienberg. A Bayesian approach to graphical record linkage and de-duplication. Journal of the American Statistical Society, in press.
[5] R. C. Steorts. Entity resolution with empirically motivated priors. Bayesian Analysis, 10(4):849–875, 2015.
[6] R. C. Steorts, R. Hall, and S. E. Fienberg. SMERED: A Bayesian approach to graphical record linkage and de-duplication. Journal of Machine Learning Research, 33:922–930, 2014.
[7] T. Broderick and R. C. Steorts. Variational Bayes for merging noisy databases. In NIPS 2014 Workshop on Advances in Variational Inference, 2014. arXiv:1410.4792.
[8] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society Series B, pages 731–792, 1997.
[9] J. W. Miller and M. T. Harrison. Mixture models with a prior on the number of components. arXiv:1502.06241, 2015.
[10] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[11] H. Ishwaran and L. F. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13(4):1211–1236, 2003.
[12] J. F. C. Kingman. The representation of partition structures. Journal of the London Mathematical Society, 2(2):374–380, 1978.
[13] D. Aldous. Exchangeability and related topics. École d'été de Probabilités de Saint-Flour XIII–1983, pages 1–198, 1985.
[14] H. M. Wallach, S. Jensen, L. Dicker, and K. A. Heller. An alternative prior process for nonparametric Bayesian clustering. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[15] V. F. Kolchin. A problem of the allocation of particles in cells and cycles of random permutations. Theory of Probability & Its Applications, 16(1):74–90, 1971.
[16] J. Pitman. Combinatorial stochastic processes. École d'été de Probabilités de Saint-Flour XXXII–2002, 2006.
[17] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[18] R. C. Steorts, S. L. Ventura, M. Sadinle, and S. E. Fienberg. A comparison of blocking methods for record linkage. In International Conference on Privacy in Statistical Databases, pages 253–268, 2014.
[19] M. Price, J. Klingner, A. Qtiesh, and P. Ball. Updated statistical analysis of documentation of killings in the Syrian Arab Republic, 2013. United Nations Office of the UN High Commissioner for Human Rights.
[20] M. Price, J. Klingner, A. Qtiesh, and P. Ball. Updated statistical analysis of documentation of killings in the Syrian Arab Republic. Human Rights Data Analysis Group, Geneva, 2014.
5,897 | 6,335 | Deep Alternative Neural Network: Exploring
Contexts as Early as Possible for Action Recognition
Jinzhuo Wang, Wenmin Wang, Xiongtao Chen, Ronggang Wang, Wen Gao*
School of Electronics and Computer Engineering, Peking University
* School of Electronics Engineering and Computer Science, Peking University
[email protected], [email protected]
[email protected], [email protected], [email protected]
Abstract
Contexts are crucial for action recognition in video. Current methods often mine
contexts after extracting hierarchical local features and focus on their high-order
encodings. This paper instead explores contexts as early as possible and leverages their evolutions for action recognition. In particular, we introduce a novel
architecture called deep alternative neural network (DANN) stacking alternative
layers. Each alternative layer consists of a volumetric convolutional layer followed
by a recurrent layer. The former acts as local feature learner while the latter is
used to collect contexts. Compared with feed-forward neural networks, DANN
learns contexts of local features from the very beginning. This setting helps to
preserve hierarchical context evolutions which we show are essential to recognize
similar actions. Besides, we present an adaptive method to determine the temporal
size for network input based on optical flow energy, and develop a volumetric
pyramid pooling layer to deal with input clips of arbitrary sizes. We demonstrate
the advantages of DANN on two benchmarks HMDB51 and UCF101 and report
competitive or superior results to the state-of-the-art.
1 Introduction
Contexts contribute semantic clues for action recognition in video. Current leading convolutional
neural networks (CNNs) [13, 22, 31] and their 3D counterparts [11, 28, 29] often aggregate
contexts in the late stage. More precisely, in the first layer of a typical CNN, receptive field (RF)
starts at the kernel size which is usually small and the outputs only extract local features. As the layer
goes deeper, RF expands and contexts start to be involved. These models need to be very deep [32] to
preserve rich context topologies and match competitive trajectory-based methods [16, 19, 20, 30]. We
speculate this is the main reason that going deeper with convolutions achieves better performance
on many visual recognition tasks [23, 26]. However, it is not wise to simply increase the number of layers due to the parameter burden. Besides, these models do not embed context evolutions of local features in
the forward flow which is essential for context mining [17, 18]. To this end, we attempt to explore
contexts as early as possible and investigate architectures for action recognition.
Our motivation also derives from the relations between CNNs and the visual system of the brain, since they share many properties [9, 10]. One remarkable difference is that abundant recurrent connections exist in the visual system of the brain [3] while CNNs only have forward connections. Anatomical evidence has shown that recurrent synapses typically outnumber feed-forward, top-down and feedback synapses in the neocortex [4, 37]. This makes visual recognition tend to be a dynamic procedure. Hence, we investigate inserting recurrent connections in the deployment of our architecture.
Recent works utilize recurrent neural networks (RNNs) with long short-term memory (LSTM) units on top of the CNN-based features of each frame to exploit semantic combinations [5, 25, 35]. These
methods can be regarded as inter-image context learners. In contrast, we attempt to apply recurrent connections to each level of hierarchical features to aggregate their context evolutions. Similar efforts have demonstrated effectiveness for image analysis tasks such as object recognition and scene parsing [21, 17, 18]. We extend this idea to the temporal domain and study its potential for action recognition.
The main contributions of this paper are summarized as follows. First, we propose a deep alternative
neural network (DANN) for action recognition. DANN stacks alternative layers, each consisting of
a volumetric convolutional layer and a recurrent layer. The alternative deployment is used to preserve
the contexts of local features as early as possible and to embed their evolutions in the hierarchical
feature learning procedure. Second, we introduce an adaptive method to determine the temporal size
of the input video clip based on the density of optical flow energy. Instead of the manual choices used
in most deep architectures, our method utilizes adaptive input video clips that preserve long-range
dependencies while not breaking semantic structures. To cope with input video clips of arbitrary
sizes, we develop a volumetric pyramid pooling layer to resize the output to a fixed size before the
fully connected layers. Finally, we conduct extensive experiments and demonstrate the benefits of our
method with early context exploration. On the two challenging benchmarks HMDB51 and UCF101,
we report competitive or superior results to the state-of-the-art.
2 Deep Alternative Neural Network
2.1 Adaptive Network Input
The input size of deep networks in the temporal domain is often determined empirically, since it is
hard to evaluate all the choices. Previous methods often consider short intervals of 1 to 16 frames
[11, 13, 27, 28]. Recent work [29] argues that human actions usually span tens or hundreds of frames
and contain characteristic patterns with long-term temporal structure. The authors use 60-frame clips
as the network input and demonstrate their advantage over 16-frame clips. However, this is still an
ad hoc choice, and it is difficult for one length to favor all the action classes. We introduce an adaptive
method to automatically select the most discriminative video fragments using the density of optical
flow energy. We attempt to preserve as much motion information and long-range dependency as
possible while not breaking semantic structures in the temporal domain.
Figure 1: Sample video clip at every 3 frames of class "golfswing" and its optical flow energy with
local minima and maxima landmarks, where landmarks approximately correspond to motion change.
Much evidence shows that the motion energy intensity induced by human activity exhibits regular
periodicity [33]. This signal can be approximately estimated by optical flow computation, as shown in
Figure 1, and is particularly suitable for our temporal estimation because: (1) the local minima
and maxima landmarks probably correspond to characteristic gestures and motions; (2) it is relatively
robust to changes in camera viewpoint. More specifically, we first compute the optical flow field
(v_x, v_y) for each frame I of a video Q and define its flow energy as

    e(I) = \sum_{(x,y) \in P} \| (v_x(x,y), v_y(x,y)) \|_2,   (1)

where P is the pixel-level set of selected interest points. The energy of Q is then obtained as E =
{e(I_1), . . . , e(I_t)}, which is further smoothed by a Gaussian filter to suppress noise. Subsequently,
we locate the local minima and maxima landmarks {t} of E and, for each two consecutive landmarks,
create a video fragment s by extracting the frames s = {I_{t-1}, . . . , I_t}. We average the fragment
length of each class and illustrate the distribution in Figure 2, which indicates that a universal
length cannot favor all classes. To deal with the varying lengths of video clips, we adopt the idea
of spatial pyramid pooling (SPP) [8] and extend it to the temporal domain, developing a volumetric
pyramid pooling (VPP) layer that transfers video clips of arbitrary size into a universal length in the last
alternative layer before the fully connected layers, as presented in Section 2.3.
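As a rough illustration of the procedure above, the following minimal sketch computes the per-frame
flow energy, smooths it, and cuts the video at consecutive landmarks. It assumes OpenCV and SciPy;
it uses Farneback flow for brevity where the experiments use TVL1 [36], and the function names
(flow_energy, segment_clips) are ours.

import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def flow_energy(frames, points=None):
    """Per-frame optical flow energy e(I) of Eq. (1), summed over points P."""
    energies = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)   # ||(vx, vy)||_2 per pixel
        energies.append(mag.sum() if points is None else mag[points].sum())
        prev = curr
    return np.array(energies)

def segment_clips(frames, sigma=2.0):
    """Split a video into fragments between consecutive energy landmarks."""
    e = gaussian_filter1d(flow_energy(frames), sigma)   # suppress noise
    minima = argrelextrema(e, np.less)[0]
    maxima = argrelextrema(e, np.greater)[0]
    landmarks = np.sort(np.concatenate([minima, maxima]))
    return [frames[a:b + 1] for a, b in zip(landmarks[:-1], landmarks[1:])]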
Figure 2: Average fragment length of each class in the UCF101 dataset.
2.2 Alternative Layer
The key component of DANN is the alternative layer (AL), which consists of a standard volumetric
convolutional layer followed by a designed recurrent layer. Specifically, the volumetric convolution is
first performed to extract features from local spatiotemporal neighborhoods on the feature maps of the
previous layer. Then a recurrent layer is applied to the output and iterates for T time steps. This
procedure makes each unit evolve over discrete time steps and aggregate larger RFs. More
formally, the input of a unit at position (x, y, z) in the jth feature map of the ith AL at time t, denoted
u_{ij}^{xyz}(t), is given by

    u_{ij}^{xyz}(t) = u_{ij}^{xyz}(0) + f( w_{ij}^r u_{ij}^{xyz}(t - 1) ) + b_{ij},
    with  u_{ij}^{xyz}(0) = f( w_{(i-1)j}^c u_{(i-1)j}^{xyz} ),   (2)

where u_{ij}^{xyz}(0) denotes the feed-forward output of the volumetric convolutional layer,
u_{ij}^{xyz}(t - 1) is the recurrent input from the previous time step, w^c and w^r are the vectorized
feed-forward and recurrent kernels, b_{ij} is the bias for the jth feature map in the ith layer, and f is
the popular rectified linear unit (ReLU) followed by local response normalization (LRN) [14]. The
first term is the output of the volumetric convolution of the previous layer and the second term is
induced by the recurrent connections. LRN mimics the lateral inhibition in the cortex, where different
features compete for large responses.
Equation 2 describes the dynamic behavior of the AL, where contexts are involved after local features
are extracted. Unfolding the recurrent connection for T time steps results in a feed-forward subnetwork
of depth T + 1, as shown in Figure 3. While the recurrent input evolves over iterations, the feed-forward
input remains the same in all iterations. When t = 0, only the feed-forward input is present. The
effective RF of an AL unit in the feature maps of the previous layer expands as the iteration
number increases.
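To make Eq. (2) concrete, the following is a minimal PyTorch sketch of an alternative layer (the
original implementation is in Torch [2]; the class name and the LRN window size below are our
assumptions, and the recurrent bias b_ij is folded into the recurrent convolution).

import torch.nn as nn

class AlternativeLayer(nn.Module):
    # Volumetric convolution followed by a recurrent layer unfolded T steps:
    # u(0) = f(w_c * input) and u(t) = u(0) + f(w_r * u(t-1)) + b for t >= 1.
    def __init__(self, in_channels, out_channels, T=3):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels, 3, padding=1)  # w_c
        self.rec = nn.Conv3d(out_channels, out_channels, 3, padding=1)  # w_r, b
        self.T = T
        # f: ReLU followed by local response normalization, as in the paper.
        self.f = nn.Sequential(nn.ReLU(inplace=True), nn.LocalResponseNorm(5))

    def forward(self, x):
        u0 = self.f(self.conv(x))   # feed-forward term u(0)
        u = u0
        for _ in range(self.T):     # each iteration enlarges the effective RF
            u = u0 + self.f(self.rec(u))
        return u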
Figure 3: Illustrations of an alternative layer (left). The unfolding recurrent procedure is on the right.
The recurrent connections in the AL provide two advantages. First, they enable every unit to incorporate
contexts in an arbitrarily large region in the current layer. As the time steps increase, the state of
every unit is influenced by other units in a larger and larger neighborhood in the current layer. As
a consequence, the size of the region that each unit can "watch" in the input space also increases. In
standard volumetric convolutions, the size of the effective RFs of the units in the current layer is fixed,
and "watching" a larger region is only possible for units in higher layers. Unfortunately, the
context seen by higher-level units cannot influence the states of the units in the current layer without
top-down connections. Second, the recurrent connections increase the network depth while keeping
the number of adjustable parameters constant through weight sharing, since an AL consumes only a
constant number of extra parameters (one recurrent kernel) compared with a standard volumetric
convolutional layer.
2.3 Volumetric Pyramid Pooling Layer
The AL accepts input video clips of arbitrary sizes and produces outputs of variable sizes. However,
the fully connected layers require fixed-length vectors. A similar phenomenon can be found in region
CNN (R-CNN) [6], where the input image patch is of arbitrary size. To adapt DANN to input video
clips of arbitrary sizes, we replace the last pooling layer with a volumetric pyramid pooling layer
(VPPL), inspired by the success of the spatial pyramid pooling layer (SPPL) [8]. Figure 4 illustrates the
structure of the VPPL. In each volumetric bin, we pool the responses of each kernel (throughout this
paper we use max pooling). The outputs of the volumetric pyramid pooling are kM-dimensional
vectors, where M is the number of bins and k is the number of kernels in the last alternative layer.
The fixed-dimensional vectors are then sent to the fully connected layers.
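The VPPL can be realized with adaptive pooling, as in the minimal sketch below; the paper only fixes
that the output is a kM-dimensional max-pooled vector, so the particular pyramid levels chosen here
are an assumption.

import torch
import torch.nn.functional as F

def volumetric_pyramid_pool(x, levels=(1, 2, 4)):
    # Map a (N, k, T, H, W) feature map of any size to a fixed (N, k * M)
    # matrix, where each level l contributes an l x l x l grid of max-pooled
    # bins and M = sum(l ** 3 for l in levels).
    n = x.size(0)
    pooled = [F.adaptive_max_pool3d(x, output_size=l).reshape(n, -1)
              for l in levels]
    return torch.cat(pooled, dim=1)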
Figure 4: A network structure with volumetric pyramid pooling layer (VPPL) to resize feature maps
of arbitrary size to fixed size.
With the VPPL, the input video clips can be of any size. This allows not only arbitrary aspect ratios but
also arbitrary scales. One can apply more compact video clips containing only semantic regions, such
as the action tubes of [7], to DANN with our VPPL to pursue potential improvements.
2.4 Overall Architecture
Figure 5: DANN has 6 alternative layers, 5 volumetric pooling layers, 1 volumetric pyramid pooling
layer, 3 fully connected layers and a softmax layer. The number of kernels is denoted in each box.
Our network architecture DANN is illustrated in Figure 5. The network has 6 alternative layers with
64, 128, 256, 256, 512 and 512 kernel response maps, followed by a volumetric pyramid pooling
layer and 3 fully connected layers of size 2048 each. Following [28], we use 3 x 3 x 3 kernels for the
volumetric convolutional layers and recurrent layers of all 6 alternative layers. After each alternative
layer, the network includes a ReLU and a volumetric max pooling layer. Max pooling kernels are of
size 2 x 2 x 2 except in the first layer, where the size is 2 x 2 x 1. All of these volumetric convolutional
layers and recurrent layers are applied with appropriate padding and stride in both the spatial and
temporal dimensions. The VPPL is applied to resize the output of the last AL to a fixed size, which is
the input of the fully connected layers. The fully connected layers are followed by ReLU layers and a
softmax at the end of the network, which outputs class scores.
3 Implementation details
The major components of DANN, including the volumetric convolutions, the recurrent layers and the
optimization routines, are implemented on top of the Torch toolbox [2].
Data Augmentation. Inspired by the random spatial cropping used during training in [23], we apply the
corresponding augmentation to the spatiotemporal dimensions, which we call random clipping. During
the training stage, given an input video, we first determine its temporal size t as discussed in Section
2.1. Then we randomly select a point (x, y, z) to sample a video clip of fixed size 80 x 80 x t. A
common alternative is to pre-process the data with a sliding window approach to obtain pre-segmented
clips. However, this approach limits the amount of data when the windows are not overlapped, as in
[28]. Another data augmentation method that we evaluate is multi-scale cropping, similar to [32].
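A minimal sketch of random clipping, assuming videos stored as (T, H, W, C) arrays and t obtained
from the adaptive estimation of Section 2.1 (the function name is ours):

import numpy as np

def random_clip(video, t, size=80):
    # Sample a random point (x, y, z) and crop a clip of fixed size
    # size x size x t, as used during training.
    T, H, W = video.shape[:3]
    z = np.random.randint(0, T - t + 1)
    y = np.random.randint(0, H - size + 1)
    x = np.random.randint(0, W - size + 1)
    return video[z:z + t, y:y + size, x:x + size]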
Training. We use SGD applied to mini-batches with a negative log-likelihood criterion. The mini-batch
size is set to 30. Training is performed by minimizing the cross-entropy loss function using
the backpropagation through time (BPTT) algorithm [34]. This is equivalent to using the standard
BP algorithm on the time-unfolded network. The final gradient of a shared weight is the sum of its
gradients over all time steps. The initial learning rate for networks learned from scratch is 3 x 10^-3,
and it is 3 x 10^-4 for networks fine-tuned from pre-trained models. The above schedule is used
together with a 0.9 dropout ratio. The momentum is set to 0.9 and the weight decay is initialized to
5 x 10^-3 and reduced by a factor of 10^-1 at every decrease of the learning rate.
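For concreteness, this schedule corresponds to the PyTorch-style SGD configuration sketched below
(the helper name is ours, and the stage-wise reduction of the learning rate and weight decay is left to
the training loop):

import torch

def make_optimizer(model, from_scratch=True):
    # Initial settings of Section 3: lr 3e-3 (3e-4 when fine-tuning),
    # momentum 0.9 and an initial weight decay of 5e-3.
    lr = 3e-3 if from_scratch else 3e-4
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=5e-3)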
Testing. At test time, a video is also processed with the temporal estimation of Section 2.1 and divided
into 80 x 80 x t clips with a temporal stride of 4 frames, where t is the adaptive temporal size. Each clip
is further tested with 10 crops, namely the 4 corners and the center, together with their horizontal flips.
The video-level score is obtained by averaging all the clip-level scores and crop scores.
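The aggregation can be summarized as follows (a sketch; the array layout is our assumption):

import numpy as np

def video_score(clip_crop_scores):
    # clip_crop_scores: (num_clips, 10, num_classes) array of scores for the
    # 4 corners, the center, and their horizontal flips of each clip.
    return np.asarray(clip_crop_scores).mean(axis=(0, 1))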
4 Evaluations
4.1 Datasets
The evaluation is performed on UCF101 [24] and HMDB51 [15] benchmarks. Specifically, UCF101
contains 13K videos, annotated into 101 classes while HMDB51 includes 6.8K videos of 51 actions.
The evaluation protocol is the same for both datasets: the organisers provide three training and test
splits, and the performance is measured by the mean classification accuracy across the splits. Each
UCF101 split contains 9.5K training videos while HMDB51 split contains 3.7K training videos.
4.2 Quantitative Results
We first evaluate several experimental deployment choices and determine the common settings. Then
we study the impact of different configurations of our DANN and investigate the optimal architecture.
Finally, we report our best model and compare with state-of-the-art results.
Optical flow quality. We used three types of optical flow as input signals. The performance influence
is summarized in Table 1(a). We observe that even sparse optical flow consistently outperforms RGB. The
use of TVL1, as suggested in [32], allows an almost 20% increase in performance. This demonstrates
that action recognition is easier to learn from motion information than from raw pixel values.
Given these results, we choose TVL1 optical flow for all remaining experiments in this paper.
Data augmentation. Table 1(b) demonstrates the influence of data augmentation. Our baseline is a
sliding window with 75% overlap. On the UCF101 split 1 dataset, we find that random clipping and
multi-scale clipping both outperform the baseline, and that their combination can further boost the
performance. Thus we use the combination strategy in the following experiments.
Temporal length. Another issue we discuss is that our DANN takes video clips with adaptive
temporal length, which is different from most existing architectures. We examine this setting by
comparing 6AL_VPPL_3FC with a new architecture, 6AL_3FC, that uses a fixed temporal length of
16, 32 or 64 frames and removes the VPPL. The performance gain of 6AL_VPPL_3FC
on UCF101 split 1 is approximately 4.2%, as shown in Table 2(a). This result verifies the advantages of
our adaptive method for determining the temporal length of the network input.
Table 1: Performance comparison of different input modalities and data augmentation strategies on
UCF101 split 1.

(a) Impact of optical flow quality.
Input      | Clip-level | Video-level
RGB        | 64.4       | 64.9
MPEG [12]  | 71.3       | 73.5
Brox [1]   | 76.7       | 77.2
TVL1 [36]  | 78.1       | 79.6

(b) Impact of data augmentation using TVL1.
Method               | Clip-level | Video-level
Sliding window       | 75.4       | 74.8
Random clipping      | 78.5       | 79.6
Multi-scale clipping | 81.2       | 82.4
Combined             | 81.6       | 82.3
Additional training data. We conduct experiments to see if our spatio-temporal features learned on
one dataset can help to improve the accuracy on the other one. Such additional data is already known
to bring some gains [22]. The performance from scratch is 56.4%, while fine-tuning
HMDB51 from UCF101 boosts the performance to 62.5%. A similar conclusion is demonstrated in
Table 2(b). We conclude that one can learn generic representations with DANN, as with C3D [28].
Table 2: Performance impact of temporal length choice and additional training data.

(a) Impact of temporal length on UCF101.
Temporal length | Clip-level | Video-level
16-frame        | 77.2       | 77.6
32-frame        | 77.3       | 77.2
64-frame        | 79.7       | 80.1
Adaptive (Ours) | 82.8       | 83.0

(b) Impact of additional training data.
Method                | Accuracy
From scratch UCF      | 80.2
Fine-tuning from HMDB | 83.7
From scratch HMDB     | 56.4
Fine-tuning from UCF  | 62.5
Model Analysis. In the following we investigate the optimal configuration of our DANN. There are
two crucial settings for the DANN model. The first one is the AL deployment, including its order and
number. The other one is the unfolding time T in the recurrent layers. Table 3 shows the details of
the performance comparison, where VC is the standard volumetric convolutional layer and B_6VC_3FC
is a baseline with a configuration similar to DANN but without ALs and without the adaptive input
size. The first column of Table 3(a) has only one AL layer, and the accuracy comparison
demonstrates the benefits of exploring contexts as early as possible. The right column of Table 3(a)
shows the performance gains as the number of ALs increases, which verifies the advantages of the
inserted recurrent layer. Table 3(b) uses 6AL_VPP_3FC to study the impact of T, and the results
show that larger T leads to better performance. This is perhaps due to larger contexts embedded into
DANN, which are more suitable for capturing semantic information.
Table 3: Performance comparison with different configurations of DANN on UCF101 split 1.

(a) Impact of the order and the number of AL using T = 3.
Architecture         | Acc. | Architecture     | Acc.
B_6VC_3FC            | 80.2 | 2AL_4VC_VPP_3FC  | 85.9
AL_5VC_VPP_3FC       | 85.1 | 3AL_3VC_VPP_3FC  | 86.7
VC_AL_4VC_VPP_3FC    | 83.3 | 4AL_2VC_VPP_3FC  | 86.4
2VC_AL_3VC_VPP_3FC   | 82.4 | 5AL_VC_VPP_3FC   | 87.5
3VC_AL_2VC_VPP_3FC   | 82.7 | 6AL_VPP_3FC      | 87.9
4VC_AL_VC_VPP_3FC    | 81.4 |                  |
5VC_AL_VPP_3FC       | 80.9 |                  |

(b) Impact of T.
Architecture       | Acc.
6AL_VPP_3FC, T = 3 | 87.9
6AL_VPP_3FC, T = 4 | 88.5
6AL_VPP_3FC, T = 5 | 88.3
6AL_VPP_3FC, T = 6 | 89.0
Combining spatial stream. Recent work [29] demonstrates that combining appearance information
learned from a spatial stream can improve the performance of a pure 3D CNN. We examine this issue and
train a network with static RGB frames similar to [22] by inputting 256 x 256 frames and cropping
them randomly into 224 x 224 regions. The VGG-16 [23] network pre-trained on ImageNet is
fine-tuned on UCF101 and HMDB51 separately. Following good practice in [32], we apply weighted
averaging of 0.4 and 0.6 for the RGB and DANN scores, respectively. Table 4 reports the final results of
our best model and its fusion with the spatial stream on the three splits of both datasets.
Comparison with the state-of-the-art. Table 4 reports the best DANN model and state-of-the-art
approaches over three splits on the UCF101 and HMDB51 datasets in terms of video-level accuracy.
As can be seen from Table 4, trajectory-based features are still competitive in the era of deep
learning, especially with the help of high-order encodings or deep architectures. Fusion strategies
often outperform pure single deep networks. Note that all the other deep networks use a pre-defined
temporal length to generate video clip inputs, such as 16 frames [28] or 60 frames [29], while our
DANN determines it in an adaptive manner. Combined with the spatial stream, DANN achieves
accuracies of 65.9% and 91.6% on HMDB51 and UCF101, respectively.
Table 4: Comparison with the state-of-the-art on HMDB51 and UCF101 (over three splits).

CNN:
Method                      | HMDB | UCF
Slow fusion [13]            | -    | 65.4
C3D [28]                    | -    | 85.2
Two-Stream (spatial) [22]   | 40.5 | 73.0
Two-Stream (temporal) [22]  | 54.6 | 83.7
LTC [29]                    | 57.9 | 83.3
Very deep (temporal) [32]   | -    | 87.0
Very deep (spatial) [32]    | -    | 87.0

Hand:
IDT+FV [30]                 | 57.2 | 85.9
IDT+HSV [19]                | 61.1 | 87.9
IDT+MIFS [16]               | 65.1 | 89.1
IDT+SFV [20]                | 66.8 | -

Fusion:
Two-stream [22]             | 59.4 | 88.0
CNN+deep LSTM [35]          | -    | 88.6
TDD [31]                    | 63.2 | 90.3
TDD+iDT [31]                | 65.9 | 91.5
C3D+iDT [28]                | -    | 90.4
Very deep (two-stream) [32] | -    | 91.4
LTC+spatial                 | 61.5 | 88.6

Ours:
DANN                        | 63.3 | 89.2
DANN+spatial                | 65.9 | 91.6
4.3 Qualitative Analysis
We present a qualitative analysis of DANN and investigate where mistakes occur. We compute per-class
accuracies and examine the difference between 6AL_VPP_3FC(T = 6) and B_6VC_3FC.
The class with the largest improvement when 6AL_VPP_3FC(T = 6) is used instead of B_6VC_3FC
is "bowling". This action is composed of preparing for a few seconds and then throwing the ball. The
adaptive temporal choice determined by DANN can aggregate more reasonable semantic structures,
while B_6VC_3FC has to choose the temporal size manually. Figure 6 illustrates sample frames
from the class "bowling". It is clear that DANN is more likely to leverage reasonable video clips
as network input. On the other hand, there are also a few classes where B_6VC_3FC outperforms
6AL_VPP_3FC(T = 6), such as "haircut". We also illustrate its sample frames in Figure 6. We
speculate this phenomenon arises partly because the rich contexts provided by 6AL_VPP_3FC(T = 6)
are not suited to simple actions performed against simple backgrounds.

Figure 6: Sample frames of "bowling" and "haircut". For "bowling", 6AL_VPP_3FC(T = 6)
segments video clips with 52 frames (Figure 2), which preserves temporal semantic structures. Such
an adaptive choice performs worse than the baseline for "haircut", where background and action are simple.
5 Conclusion and Future Work
This paper introduces a deep alternative neural network (DANN) for action recognition. DANN
stacks alternative layers, each consisting of a volumetric convolutional layer and a recurrent layer.
To preserve motion structures in the temporal domain, we present an adaptive method to determine the
temporal size of the network input and develop a volumetric pyramid pooling layer to resize the output
before the fully connected layers into a fixed-size vector. We demonstrate the advantages of DANN on
the HMDB51 and UCF101 benchmarks and report competitive or superior results to the state-of-the-art.
There still remain some potential areas of improvement. The most prominent one is the input size.
Although our model uses an adaptive temporal length, the spatial size is still chosen in an ad hoc
manner. A more compact input of arbitrary size, such as the action tube [7], which contains only the
key actor and the spatiotemporal movement regions, will be studied in the future.
Acknowledgements. The work was supported by Shenzhen Peacock Plan (20130408-183003656).
References
[1] Thomas Brox, Andrés Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow estimation based on a theory for warping. In ECCV, pages 25-36, 2004.
[2] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[3] Peter Dayan and Laurence F Abbott. Theoretical neuroscience, volume 806. Cambridge, MA: MIT Press, 2001.
[4] Gustavo Deco and Tai Sing Lee. The role of early visual cortex in visual integration: a neural model of recurrent interaction. European Journal of Neuroscience, 20(4):1089-1100, 2004.
[5] Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In ICCV, pages 2625-2634, 2015.
[6] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580-587, 2014.
[7] Georgia Gkioxari and Jitendra Malik. Finding action tubes. In CVPR, pages 759-768, 2015.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. TPAMI, 37(9):1904-1916, 2015.
[9] David H Hubel and Torsten N Wiesel. Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148(3):574-591, 1959.
[10] David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106-154, 1962.
[11] Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. TPAMI, 35(1):221-231, 2013.
[12] Vadim Kantorov and Ivan Laptev. Efficient feature extraction, encoding and classification for action recognition. In CVPR, pages 2593-2600, 2014.
[13] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, pages 1725-1732, 2014.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097-1105, 2012.
[15] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. HMDB: a large video database for human motion recognition. In ICCV, pages 2556-2563, 2011.
[16] Zhengzhong Lan, Ming Lin, Xuanchong Li, Alex G Hauptmann, and Bhiksha Raj. Beyond gaussian pyramid: Multi-skip feature stacking for action recognition. In CVPR, pages 204-212, 2015.
[17] Ming Liang and Xiaolin Hu. Recurrent convolutional neural network for object recognition. In CVPR, pages 3367-3375, 2015.
[18] Ming Liang, Xiaolin Hu, and Bo Zhang. Convolutional neural networks with intra-layer recurrent connections for scene labeling. In NIPS, pages 937-945, 2015.
[19] Xiaojiang Peng, Limin Wang, Xingxing Wang, and Yu Qiao. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. arXiv preprint arXiv:1405.4506, 2014.
[20] Xiaojiang Peng, Changqing Zou, Yu Qiao, and Qiang Peng. Action recognition with stacked fisher vectors. In ECCV, pages 581-595, 2014.
[21] Pedro Pinheiro and Ronan Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, pages 82-90, 2014.
[22] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, pages 568-576, 2014.
[23] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[24] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[25] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, pages 843-852, 2015.
[26] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pages 1-9, 2015.
[27] Graham W Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. Convolutional learning of spatio-temporal features. In ECCV, pages 140-153, 2010.
[28] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, pages 4489-4497, 2015.
[29] Gül Varol, Ivan Laptev, and Cordelia Schmid. Long-term temporal convolutions for action recognition. arXiv preprint arXiv:1604.04494, 2016.
[30] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, pages 3551-3558, 2013.
[31] Limin Wang, Yu Qiao, and Xiaoou Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In CVPR, pages 4305-4314, 2015.
[32] Limin Wang, Yuanjun Xiong, Zhe Wang, and Yu Qiao. Towards good practices for very deep two-stream convnets. arXiv preprint arXiv:1507.02159, 2015.
[33] RL Waters and JM Morris. Electrical activity of muscles of the trunk during walking. Journal of Anatomy, 111(Pt 2):191, 1972.
[34] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, 1990.
[35] Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and George Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, pages 4694-4702, 2015.
[36] Christopher Zach, Thomas Pock, and Horst Bischof. A duality based approach for realtime TV-L1 optical flow. In Pattern Recognition, pages 214-223, 2007.
[37] Matthew D Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional neural networks. arXiv preprint arXiv:1301.3557, 2013.
5,898 | 6,336 | Boosting with Abstention
Corinna Cortes
Google Research
New York, NY 10011
Giulia DeSalvo
Courant Institute
New York, NY 10012
Mehryar Mohri
Courant Institute and Google
New York, NY 10012
[email protected]
[email protected]
[email protected]
Abstract
We present a new boosting algorithm for the key scenario of binary classification
with abstention where the algorithm can abstain from predicting the label of a point,
at the price of a fixed cost. At each round, our algorithm selects a pair of functions,
a base predictor and a base abstention function. We define convex upper bounds
for the natural loss function associated to this problem, which we prove to be
calibrated with respect to the Bayes solution. Our algorithm benefits from general
margin-based learning guarantees which we derive for ensembles of pairs of base
predictor and abstention functions, in terms of the Rademacher complexities of the
corresponding function classes. We give convergence guarantees for our algorithm
along with a linear-time weak-learning algorithm for abstention stumps. We also
report the results of several experiments suggesting that our algorithm provides a
significant improvement in practice over two confidence-based algorithms.
1 Introduction
Classification with abstention is a key learning scenario where the algorithm can abstain from making
a prediction, at the price of incurring a fixed cost. This is the natural scenario in a variety of common
and important applications. An example is spoken-dialog applications where the system can redirect
a call to an operator to avoid the cost of incorrectly assigning a category to a spoken utterance and
misguiding the dialog manager. This requires the availability of an operator, which incurs a fixed and
predefined price. Other examples arise in the design of a search engine or an information extraction
system, where, rather than taking the risk of displaying an irrelevant document, the system can resort
to the help of a more sophisticated, but more time-consuming classifier. More generally, this learning
scenario arises in a wide range of applications including health, bioinformatics, astronomical event
detection, active learning, and many others, where abstention is an acceptable option with some cost.
Classification with abstention is thus a highly relevant problem.
The standard approach for tackling this problem is via confidence-based abstention: a real-valued
function h is learned for the classification problem and the points x for which its magnitude |h(x)| is
smaller than some threshold are rejected. Bartlett and Wegkamp [1] gave a theoretical analysis of
this approach based on consistency. They introduced a discontinuous loss function taking into account
the cost for rejection, upper-bounded that loss by a convex and continuous Double Hinge Loss (DHL)
surrogate, and derived an algorithm based on that convex surrogate loss. Their work inspired a series
of follow-up papers that developed both the theory and practice behind confidence-based abstention
[32, 15, 31]. Further related works can be found in Appendix A.
In this paper, we present a solution to the problem of classification with abstention that radically
departs from the confidence-based approach. We introduce a general model where a pair (h, r)
for a classifier h and rejection function r are learned simultaneously. Under this novel framework,
we present a Boosting-style algorithm with Abstention, BA, that learns accurately the classifier
and abstention functions. Note that the terminology of ?boosting with abstention? was used by
Schapire and Singer [26] to refer to a scenario where a base classifier is allowed to abstain, but
Figure 1: The best predictor h is defined by the threshold θ: h(x) = x − θ. For c < 1/2, the
region defined by X ≤ δ should be rejected. But the corresponding abstention function r defined by
r(x) = x − δ cannot be defined as |h(x)| − γ for any γ > 0.
where the boosting algorithm itself has to commit to a prediction. This is therefore distinct from the
scenario of classification with abstention studied here. Nevertheless, we will introduce and examine
a confidence-based Two-Step Boosting algorithm, the TSB algorithm, that consists of first training
AdaBoost and next searching for the best confidence-based abstention threshold.
The paper is organized as follows. Section 2 describes our general abstention model, which consists
of learning a pair (h, r) simultaneously, and compares it with confidence-based models. Section 3
presents a series of theoretical results for the problem of learning convex ensembles for classification
with abstention, including the introduction of calibrated convex surrogate losses and general data-dependent
learning guarantees. In Section 4, we use these learning bounds to design a regularized
boosting algorithm. We further prove the convergence of the algorithm and present a linear-time
weak-learning algorithm for abstention stumps. Finally, in Section 5, we report
several experimental results comparing the BA algorithm with the DHL and the TSB algorithms.
2 Preliminaries
In this section, we first introduce a general model for learning with abstention [7] and then compare
it with confidence-based models.
2.1 General abstention model
We assume as in standard supervised learning that the training and test points are drawn i.i.d. according
to some fixed but unknown distribution D over X × {−1, +1}. We consider the learning scenario of
binary classification with abstention. Given an instance x ∈ X, the learner has the option of abstaining
from making a prediction for x at the price of incurring a non-negative loss c(x), or otherwise making
a prediction h(x) using a predictor h and incurring the standard zero-one loss 1_{yh(x)≤0} where the
true label is y. Since a random guess achieves an expected cost of at most 1/2, rejection only makes
sense for c(x) < 1/2.
We will model the learner by a pair (h, r) where the function r : X → R determines the points
x ∈ X to be rejected according to r(x) ≤ 0 and where the hypothesis h : X → R predicts labels for
non-rejected points via its sign. Extending the loss function considered in Bartlett and Wegkamp [1],
the abstention loss for a pair (h, r) is defined as follows for any (x, y) ∈ X × {−1, +1}:

    L(h, r, x, y) = 1_{yh(x)≤0} 1_{r(x)>0} + c(x) 1_{r(x)≤0}.   (1)

The abstention cost c(x) is assumed known to the learner. In the following, we assume that c is a
constant function, but part of our analysis is applicable to the more general case.
We denote by H and R two families of functions mapping X to R, and we assume the labeled sample
S = ((x_1, y_1), . . . , (x_m, y_m)) is drawn i.i.d. from D^m. The learning problem consists of determining
a pair (h, r) ∈ H × R that admits a small expected abstention loss R(h, r), defined as follows:

    R(h, r) = E_{(x,y)∼D}[ 1_{yh(x)≤0} 1_{r(x)>0} + c 1_{r(x)≤0} ].   (2)

Similarly, we define the empirical loss of a pair (h, r) ∈ H × R over the sample S by R̂_S(h, r) =
E_{(x,y)∼S}[ 1_{yh(x)≤0} 1_{r(x)>0} + c 1_{r(x)≤0} ], where (x, y) ∼ S indicates that (x, y) is drawn according
to the empirical distribution defined by S.
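For illustration, the abstention loss of Eq. (1) with a constant cost c can be evaluated as in the
following NumPy sketch (the function name is ours):

import numpy as np

def abstention_loss(h, r, y, c):
    # L(h, r, x, y) = 1_{y h(x) <= 0} 1_{r(x) > 0} + c 1_{r(x) <= 0}:
    # a point is rejected at cost c when r(x) <= 0; otherwise the zero-one
    # loss of sign(h(x)) against the label y in {-1, +1} is incurred.
    h, r, y = map(np.asarray, (h, r, y))
    return np.where(r <= 0, c, (y * h <= 0).astype(float))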
2.2 Confidence-based abstention model

Confidence-based models are a special case of the general model for learning with rejection presented
in Section 2.1 corresponding to the pair (h(x), r(x)) = (h(x), |h(x)| − γ), where γ is a parameter
that changes the threshold of rejection. This specific choice was based on consistency results
shown in [1]. In particular, the Bayes solution (h*, r*) of the learning problem, that is where the
distribution D is known, is given by h*(x) = η(x) − 1/2 and r*(x) = |h*(x)| − (1/2 − c), where
η(x) = P[Y = +1|x] for any x ∈ X, but note that this is not a unique solution. The form of h*(x)
follows by a similar reasoning as for the standard binary classification problem. It is straightforward
to see that the optimal rejection function r* is non-positive, meaning a point is rejected, if and only if
min{η(x), 1 − η(x)} ≥ c. Equivalently, the following holds: max{η(x) − 1/2, 1/2 − η(x)} ≤ 1/2 − c if
and only if |η(x) − 1/2| ≤ 1/2 − c, and using the definition of h*, we recover the optimal r*. In light
of the Bayes solution, the specific choice of the abstention function r is natural; however, requiring
the abstention function r to be defined as r(x) = |h(x)| − γ, for some h ∈ H, is in general too
restrictive when predictors are selected out of a limited subset H of all measurable functions over X.
Consider the example shown in Figure 1 where H is a family of linear functions. For this simple case,
the optimal abstention region cannot be attained as a function of the best predictor h while it can
be achieved by allowing to learn a pair (h, r). Thus, the general model for learning with abstention
analyzed in Section 2.1 is both more flexible and more general.
3 Theoretical analysis
This section presents a theoretical analysis of the problem of learning convex ensembles for classification with abstention. We first introduce general convex surrogate functions for the abstention loss and
prove a necessary and sufficient condition based on their parameters for them to be calibrated. Next
we define the ensemble family we consider and prove general data-dependent learning guarantees for
it based on the Rademacher complexities of the base predictor and base rejector sets.
3.1 Convex surrogates
We introduce two types of convex surrogate functions for the abstention loss.
Observe that the abstention loss L(h, r, x, y) can be equivalently expressed as L(h, r, x, y) =
max( 1_{yh(x)≤0} 1_{−r(x)<0}, c 1_{r(x)≤0} ). In view of that, since for any f, g ∈ R, max(f, g) =
(f + g + |g − f|)/2 ≥ (f + g)/2, the following inequalities hold for a > 0 and b > 0:

    L(h, r, x, y) = max( 1_{yh(x)≤0} 1_{−r(x)<0}, c 1_{r(x)≤0} )
                  ≤ max( 1_{max(yh(x), −r(x)) ≤ 0}, c 1_{r(x)≤0} )
                  ≤ max( 1_{(yh(x) − r(x))/2 ≤ 0}, c 1_{r(x)≤0} )
                  = max( 1_{a[yh(x) − r(x)] ≤ 0}, c 1_{b r(x) ≤ 0} )
                  ≤ max( Φ₁(a[r(x) − yh(x)]), c Φ₂(−b r(x)) ),

where u ↦ Φ₁(−u) and u ↦ Φ₂(−u) are two non-increasing convex functions upper-bounding
u ↦ 1_{u≤0} over R. Let L_MB be the convex surrogate defined by the last inequality above:

    L_MB(h, r, x, y) = max( Φ₁(a[r(x) − yh(x)]), c Φ₂(−b r(x)) ).   (3)

Since L_MB is not differentiable everywhere, we upper-bound the convex surrogate L_MB as follows:
max( Φ₁(a[r(x) − yh(x)]), c Φ₂(−b r(x)) ) ≤ Φ₁(a[r(x) − yh(x)]) + c Φ₂(−b r(x)). Similarly, we let
L_SB denote this convex surrogate:

    L_SB(h, r, x, y) = Φ₁(a[r(x) − yh(x)]) + c Φ₂(−b r(x)).   (4)

Figure 2 shows the plots of the convex surrogates L_MB and L_SB as well as that of the abstention loss.
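Both surrogates are straightforward to evaluate for the exponential choice Φ₁(u) = Φ₂(u) = exp(u),
as in the sketch below; the default value of b implements the calibrated choice discussed next (the
function name and vectorized form are ours).

import numpy as np

def surrogate_losses(h, r, y, c, a=1.0, b=None):
    # Returns (L_MB, L_SB) of Eqs. (3) and (4) with Phi_1 = Phi_2 = exp.
    # By default b = 2 * sqrt((1 - c) / c), the calibrated choice for a = 1.
    if b is None:
        b = 2.0 * np.sqrt((1.0 - c) / c)
    t1 = np.exp(a * (r - y * h))   # Phi_1(a [r(x) - y h(x)])
    t2 = c * np.exp(-b * r)        # c Phi_2(-b r(x))
    return np.maximum(t1, t2), t1 + t2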
Let (h*_L, r*_L) denote the pair that attains the minimum of the expected loss E_{x,y}[L_SB(h, r, x, y)] over
all measurable functions for Φ₁(u) = Φ₂(u) = exp(u). In Appendix F, we show that with η(x) =
P(Y = +1|X = x), the pair (h*_L, r*_L) where

    h*_L = (1/(2a)) log( η/(1 − η) )   and   r*_L = (1/(a + b)) log( (cb/(2a)) · 1/√(η(1 − η)) )

makes L_SB a calibrated loss, meaning that the sign of the (h*_L, r*_L) that minimizes the expected
surrogate loss matches the sign of the Bayes classifier (h*, r*). More precisely, the following holds.

Theorem 1 (Calibration of convex surrogate). For a > 0 and b > 0, the inf_{(h,r)} E_{(x,y)}[L_SB(h, r, x, y)]
is attained at (h*_L, r*_L) such that sign(h*) = sign(h*_L) and sign(r*) = sign(r*_L) if and only if
b/a = 2√((1 − c)/c).
Figure 2: The left figure is a plot of the abstention loss. The middle figure is a plot of the surrogate
function L_MB, while the right figure is a plot of the surrogate loss L_SB, both for c = 0.45.
The theorem shows that the classification and rejection solution obtained by minimizing the surrogate
loss for that choice of (a, b) coincides with the one obtained using the original loss. In the following,
we make the explicit choice of a = 1 and b = 2√((1 − c)/c) for the loss L_SB to be calibrated.
3.2 Learning guarantees for ensembles in classification with abstention
In the standard scenario of classification, it is often easy to come up with simple base classifiers that
may abstain. As an example, a simple rule could classify a message as spam based on the presence
of some word, as ham in the presence of some other word, and just abstain in the absence of both,
as in the boosting with abstention algorithm by Schapire and Singer [26]. Our objective is to learn
ensembles of such base hypotheses to create accurate solutions for classification with abstention.
Our ensemble functions are based on the framework described in Section 2.1. Let H and R be two
families of functions mapping X to [−1, 1]. The ensemble family F that we consider is then the
convex hull of H × R:

    F = { ( Σ_{t=1}^T α_t h_t , Σ_{t=1}^T α_t r_t ) : T ≥ 1, α_t ≥ 0, Σ_{t=1}^T α_t = 1, h_t ∈ H, r_t ∈ R }.   (5)

Thus, (h, r) ∈ F abstains on input x ∈ X when r(x) ≤ 0 and predicts the label sign(h(x)) otherwise.
Let u ↦ Φ₁(−u) and u ↦ Φ₂(−u) be two strictly decreasing differentiable convex functions upper-bounding
u ↦ 1_{u≤0} over R. For calibration constants a, b > 0 and cost c > 0, we assume that there
exist u and v such that Φ₁(a u) < 1 and c Φ₂(v) < 1, otherwise the surrogate would not be useful.
Let Φ₁⁻¹ and Φ₂⁻¹ be the inverse functions, which always exist since Φ₁ and Φ₂ are strictly monotone.
We will use the following definitions: C_{Φ₁} = 2a Φ₁'(Φ₁⁻¹(1)) and C_{Φ₂} = 2cb Φ₂'(Φ₂⁻¹(1/c)).
Observe that for Φ₁(u) = Φ₂(u) = exp(u), we simply have C_{Φ₁} = 2a and C_{Φ₂} = 2b.
Theorem 2. Let H and R be two families of functions mapping X to R. Assume N > 1. Then, for
any δ > 0, with probability at least 1 − δ over the draw of a sample S of size m from D, the following
holds for all (h, r) ∈ F:

    R(h, r) ≤ E_{(x,y)∼S}[L_MB(h, r, x, y)] + C_{Φ₁} R_m(H) + (C_{Φ₁} + C_{Φ₂}) R_m(R) + √( log(1/δ) / (2m) ).
The proof is given in Appendix C. The theorem gives effective learning guarantees for ensemble
pairs (h, r) ∈ F when the base predictor and abstention functions admit favorable Rademacher
complexities. In earlier work [7], we present a learning bound for a different type of surrogate losses
which can also be extended to hold for ensembles.
Next, we derive margin-based guarantees in the case where Φ₁(u) = Φ₂(u) = exp(u). For any
ρ > 0, the margin losses associated to L_MB and L_SB are denoted by L^ρ_MB and L^ρ_SB and defined for all
(h, r) ∈ F and (x, y) ∈ X × {−1, +1} by

    L^ρ_MB(h, r, x, y) = L_MB(h/ρ, r/ρ, x, y)   and   L^ρ_SB(h, r, x, y) = L_SB(h/ρ, r/ρ, x, y).
Theorem 2 applied to this margin-based loss results in the following corollary.
Corollary 3. Assume N > 1 and fix ρ > 0. Then, for any δ > 0, with probability at least 1 − δ over
the draw of an i.i.d. sample S of size m from D, the following holds for all (h, r) ∈ F:

    R(h, r) ≤ E_{(x,y)∼S}[L^ρ_MB(h, r, x, y)] + (2a/ρ) R_m(H) + (2(a + b)/ρ) R_m(R) + √( log(1/δ) / (2m) ).
BA(S = ((x_1, y_1), . . . , (x_m, y_m)))
 1  for i ← 1 to m do
 2      D_1(i, 1) ← 1/(2m); D_1(i, 2) ← 1/(2m)
 3  for t ← 1 to T do
 4      Z_{1,t} ← Σ_{i=1}^m D_t(i, 1); Z_{2,t} ← Σ_{i=1}^m D_t(i, 2)
 5      k ← argmin_{j∈[1,N]} 2 Z_{1,t} ε_{t,j} + Z_{1,t} r̄_{j,1} − 2√(c(1−c)) Z_{2,t} r̄_{j,2}    ▷ Direction
 6      Z ← Z_{1,t} ( ε_{t,k} + r̄_{k,1}/2 ) − √(c(1−c)) Z_{2,t} r̄_{k,2}/2
 7      if (Z_{1,t} − Z) e^{−α_{t−1,k}} − Z e^{α_{t−1,k}} < λm then
 8          η_t ← −α_{t−1,k}    ▷ Step
 9      else η_t ← log[ −λm/(2Z) + √( [λm/(2Z)]^2 + Z_{1,t}/Z − 1 ) ]
10      α_t ← α_{t−1} + η_t e_k
11      r_t ← Σ_{j=1}^N α_j r_j
12      h_t ← Σ_{j=1}^N α_j h_j
13      Z_{t+1} ← Σ_{i=1}^m [ Φ'( r_t(x_i) − y_i h_t(x_i) ) + Φ'( −2√((1−c)/c) r_t(x_i) ) ]
14      for i ← 1 to m do
15          D_{t+1}(i, 1) ← Φ'( r_t(x_i) − y_i h_t(x_i) ) / Z_{t+1};  D_{t+1}(i, 2) ← Φ'( −2√((1−c)/c) r_t(x_i) ) / Z_{t+1}
16  (h, r) ← Σ_{j=1}^N α_{T,j} (h_j, r_j)
17  return (h, r)

Figure 3: Pseudocode of the BA algorithm for both the exponential loss with Φ₁(u) = Φ₂(u) =
exp(u) as well as for the logistic loss with Φ₁(u) = Φ₂(u) = log₂(1 + e^u). The parameters include
the cost of rejection c and λ, determining the strength of the α-constraint for the L1 regularization.
The definition of the weighted errors ε_{t,k} as well as the expected rejections, r̄_{k,1} and r̄_{k,2}, are given
in Equation 7. For other surrogate losses, the step size η_t is found via a line search or other numerical
methods by solving argmin_η F(α_{t−1} + η e_k).
The bound of Corollary 3 applies similarly to L^ρ_SB since it is an upper bound on L^ρ_MB. It can further
be shown to hold uniformly for all ρ ∈ (0, 1) at the price of a term in O(√( log log(1/ρ) / m )) using standard
techniques [16, 22] (see Appendix C).
4 Boosting algorithm
Here, we derive a boosting-style algorithm (BA algorithm) for learning an ensemble with the option
of abstention for both losses L_MB and L_SB. Below, we describe the algorithm for L_SB and refer the
reader to Appendix H for the version using the loss L_MB.
4.1 Objective function
The BA algorithm solves a convex optimization problem that is based on Corollary 3 for the loss
L_SB. Since the last three terms of the right-hand side of the bound of the corollary do not depend
on α, this suggests selecting α as the solution of min_{α∈Δ} (1/m) Σ_{i=1}^m L^ρ_SB(h, r, x_i, y_i). Via
a change of variable α ← α/ρ that does not affect the optimization problem, we can equivalently
search for min_{α≥0} (1/m) Σ_{i=1}^m L_SB(h, r, x_i, y_i) such that Σ_{t=1}^T α_t ≤ 1/ρ. Introducing the
Lagrange variable λ ≥ 0 associated to the constraint Σ_{t=1}^T α_t ≤ 1/ρ, the problem can be rewritten as
min_{α≥0} (1/m) Σ_{i=1}^m L_SB(h, r, x_i, y_i) + λ Σ_{t=1}^T α_t. Letting {(h_1, r_1), . . . , (h_N, r_N)} be the set of base
function pairs for the classifier and rejection function, we can rewrite the optimization problem as
the minimization over α ≥ 0 of

    (1/m) Σ_{i=1}^m [ Φ( Σ_{j=1}^N α_j r_j(x_i) − y_i Σ_{j=1}^N α_j h_j(x_i) ) + c Φ( −b Σ_{j=1}^N α_j r_j(x_i) ) ] + λ Σ_{j=1}^N α_j.
Thus, the following is the objective function of our optimization problem:

    F(α) = (1/m) Σ_{i=1}^m [ Φ( r(x_i) − y_i h(x_i) ) + c Φ( −b r(x_i) ) ] + λ Σ_{j=1}^N α_j,   (6)

where h = Σ_{j=1}^N α_j h_j and r = Σ_{j=1}^N α_j r_j.

4.2 Projected coordinate descent
The problem min_{α≥0} F(α) is a convex optimization problem, which we solve via projected
coordinate descent. Let e_k be the kth unit vector in R^N and let F'(α, e_j) be the directional
derivative of F along the direction e_j at α. The algorithm consists of the following three
steps. First, it determines the direction of maximal descent by k = argmax_{j∈[1,N]} |F'(α_{t−1}, e_j)|.
Second, it calculates the best step η along that direction that preserves the non-negativity of α by
η = argmin_{α_{t−1}+ηe_k ≥ 0} F(α_{t−1} + η e_k). Third, it updates α_{t−1} to α_t = α_{t−1} + η e_k.
The pseudocode of the BA algorithm is given in Figure 3. The step and direction are based on
F'(α_{t−1}, e_j). For any t ∈ [1, T], define a distribution D_t over the pairs (i, n), with n in {1, 2}:

    D_t(i, 1) = Φ'( r_{t−1}(x_i) − y_i h_{t−1}(x_i) ) / Z_t   and   D_t(i, 2) = Φ'( −b r_{t−1}(x_i) ) / Z_t,

where Z_t is the normalization factor given by Z_t = Σ_{i=1}^m [ Φ'( r_{t−1}(x_i) − y_i h_{t−1}(x_i) ) +
Φ'( −b r_{t−1}(x_i) ) ]. In order to derive an explicit formulation of the descent direction that is based
on the weighted error of the classification function h_j and the expected value of the rejection function
r_j, we use the distributions D_{1,t} and D_{2,t} defined by D_t(i, 1)/Z_{1,t} and D_t(i, 2)/Z_{2,t}, where
Z_{1,t} = Σ_{i=1}^m D_t(i, 1) and Z_{2,t} = Σ_{i=1}^m D_t(i, 2) are the normalization factors. Now, for any
j ∈ [1, N] and t ∈ [1, T], we can define the weighted error ε_{t,j} and the expected values of the
rejection function, r̄_{j,1} and r̄_{j,2}, over the distributions D_{1,t} and D_{2,t} as follows:

    ε_{t,j} = (1/2)[ 1 − E_{i∼D_{1,t}}[y_i h_j(x_i)] ],   r̄_{j,1} = E_{i∼D_{1,t}}[r_j(x_i)],   r̄_{j,2} = E_{i∼D_{2,t}}[r_j(x_i)].   (7)

Using these definitions, we show (see Appendix D) that the descent direction is given by

    k = argmin_{j∈[1,N]} 2 Z_{1,t} ε_{t,j} + Z_{1,t} r̄_{j,1} − 2√(c(1−c)) Z_{2,t} r̄_{j,2}.

This equation shows that Z_{1,t} and 2√(c(1−c)) Z_{2,t} re-scale the weighted error and the expected rejection.
Thus, finding the best descent direction by minimizing this equation is equivalent to finding the best
scaled trade-off between the misclassification error and the average rejection cost. The step size can
in general be found via a line search or other numerical methods, but we have derived a closed-form
solution of the step size for both the exponential and the logistic loss (see Appendix D.2). Further details
of the derivation of projected coordinate descent on F are also given in Appendix D.
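To make the direction-selection step concrete, the sketch below computes the distributions, the
weighted errors and expected rejections of Eq. (7), and the argmin above for Φ = exp (a NumPy
sketch under our own matrix conventions; it is not the full BA loop, which also requires the step size
and the projection):

import numpy as np

def ba_direction(alpha, H, R, y, c):
    # H, R: (m, N) matrices of base values h_j(x_i) and r_j(x_i); alpha: the
    # current weight vector; b is the calibrated constant 2*sqrt((1-c)/c).
    # The common normalizer Z_t is dropped since it does not change the argmin.
    b = 2.0 * np.sqrt((1.0 - c) / c)
    h, r = H @ alpha, R @ alpha
    w1 = np.exp(r - y * h)            # Phi'(r(x_i) - y_i h(x_i))
    w2 = np.exp(-b * r)               # Phi'(-b r(x_i))
    Z1, Z2 = w1.sum(), w2.sum()       # proportional to Z_{1,t}, Z_{2,t}
    D1, D2 = w1 / Z1, w2 / Z2         # distributions D_{1,t}, D_{2,t}
    eps = 0.5 * (1.0 - D1 @ (y[:, None] * H))   # weighted errors, Eq. (7)
    r1, r2 = D1 @ R, D2 @ R                     # expected rejections, Eq. (7)
    score = 2 * Z1 * eps + Z1 * r1 - 2 * np.sqrt(c * (1 - c)) * Z2 * r2
    return int(np.argmin(score))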
Note that for r_t → 0⁺ in Equation 6, that is when the rejection terms are dropped from the objective, we
retrieve L1-regularized AdaBoost. As for AdaBoost, we can define a weak learning assumption
which requires that the directional derivative along at least one base pair be non-zero. For λ = 0, it
does not hold when for all j: 2ε_{t,j} − 1 = −r̄_{j,1} + ( 2√(c(1−c)) Z_{2,t} / Z_{1,t} ) r̄_{j,2}, which corresponds to a balance
between the edge and rejection costs for all j. Observe that in the particular case where the rejection
functions are zero, this coincides with the standard weak learning assumption for AdaBoost (ε_{t,j} = 1/2
for all j).
The following theorem provides the convergence of the projected coordinate descent algorithm for
our objective function F(α). The proof is given in Appendix E.

Theorem 4. Assume that Φ is twice differentiable and that Φ''(u) > 0 for all u ∈ R. Then, the
projected coordinate descent algorithm applied to F converges to the solution α* of the optimization
problem min_{α≥0} F(α). If additionally Φ is strongly convex over the path of the iterates α_t, then
there exist τ > 0 and γ > 0 such that for all t > τ, F(α_{t+1}) − F(α*) ≤ (1 − 1/γ)( F(α_t) − F(α*) ).
Figure 4: Illustrations of the abstention stumps on a variable X.
Specifically, this theorem holds for the exponential loss Φ(u) = exp(u) and the logistic loss
Φ(u) = log₂(1 + e^u) since they are strongly convex over the compact set containing the α_t's.
4.3 Abstention stumps
We first define a family of base hypotheses, abstention stumps, that can be viewed as extensions of the
standard boosting stumps to the setting of classification with abstention. An abstention stump h_{θ₁,θ₂}
over the feature X is defined by two thresholds θ₁, θ₂ ∈ R with θ₁ ≤ θ₂. There are 6 different such
stumps; Figure 4 illustrates two of them. For the left figure, points with variable X less than or equal
to θ₁ are labeled negatively, those with X > θ₂ are labeled positively, and those with X between θ₁
and θ₂ are rejected. In general, an abstention stump is defined by the pair ( h_{θ₁,θ₂}(X), r̃_{θ₁,θ₂}(X) )
where, for Figure 4-left, h_{θ₁,θ₂}(X) = −1_{X≤θ₁} + 1_{X>θ₂} and r̃_{θ₁,θ₂}(X) = 1_{θ₁<X≤θ₂}.
Thus, our abstention stumps are pairs (h, r̃) with h taking values in {−1, 0, 1} and r̃ in {0, 1}, and
such that for any x either h(x) or r̃(x) is zero. For our formulation and algorithm, these stumps can
be used in combination with any γ > 0 to define a family of base predictor and base rejector pairs of
the form (h(x), γ − r̃(x)). Since α_t is non-negative, the value γ is needed to correct for over-rejection
by previously selected abstention stumps. The γ can be automatically learned by adding to the set
of base pairs the constant functions (h₀, r₀) = (0, −1). An ensemble solution returned by the BA
algorithm is therefore of the form ( Σ_t α_t h_t(x), Σ_t α_t r_t(x) ), where the α_t's are the weights assigned to
each base pair.
Now, consider a sample of m points sorted by the value of X, which we denote by X₁ ≤ · · · ≤ X_m.
For abstention stumps, the derivative of the objective F can be further simplified (see Appendix G)
such that the problem reduces to finding an abstention stump with minimal expected abstention loss
l(θ₁, θ₂), that is,

    argmin_{θ₁,θ₂} Σ_{i=1}^m 2 D_t(i, 1)[ 1_{y_i=+1} 1_{X_i≤θ₁} + 1_{y_i=−1} 1_{X_i>θ₂} ] + [ 2 D_t(i, 1) − cb D_t(i, 2) ] 1_{θ₁<X_i≤θ₂}.
Notice that given $m$ points, at most $(m+1)$ thresholds need to be considered for $\theta_1$ and $\theta_2$. Hence, a straightforward algorithm inspects all $O(m^2)$ pairs $(\theta_1, \theta_2)$ with $\theta_1 \leq \theta_2$ in time $O(m^2)$. However, Lemma 5 below, together with further derivations in Appendix G, allows for an $O(m)$-time algorithm for finding optimal abstention stumps when the problem is solved without the constraint $\theta_1 \leq \theta_2$. Note that while we state the lemma for the abstention stump in Figure 4-left, similar results hold for any of the 6 types of stumps.
Lemma 5. The optimization problem without the constraint $\theta_1 < \theta_2$ can be decomposed as follows:
$$\operatorname*{argmin}_{\theta_1,\theta_2} l(\theta_1, \theta_2) = \operatorname*{argmin}_{\theta_1} \sum_{i=1}^{m} 2D_t(i,1)\,\mathbb{1}_{y_i=+1}\mathbb{1}_{X_i \leq \theta_1} + \big(2D_t(i,1) - c_b D_t(i,2)\big)\mathbb{1}_{\theta_1 < X_i} \qquad (8)$$
$$+ \operatorname*{argmin}_{\theta_2} \sum_{i=1}^{m} 2D_t(i,1)\,\mathbb{1}_{y_i=-1}\mathbb{1}_{X_i > \theta_2} + \big(2D_t(i,1) - c_b D_t(i,2)\big)\mathbb{1}_{X_i \leq \theta_2}. \qquad (9)$$
The optimization problems (8) and (9) can each be solved in linear time, via a method similar to that used for finding the optimal threshold of a standard zero-one-loss boosting stump. When the condition $\theta_1 < \theta_2$ does not hold, we can simply revert to finding the minimum of $l(\theta_1, \theta_2)$ in the naive way. In practice, we find that the optimal solutions of Problems (8) and (9) most often satisfy $\theta_1 < \theta_2$.
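As an illustration of the linear-time scan behind Problem (8), here is a Python sketch (ours, not the authors' code); D1 and D2 stand for the weights $D_t(\cdot,1)$ and $D_t(\cdot,2)$, and candidate thresholds are taken at the sorted sample points.

import numpy as np

def best_theta1(X, y, D1, D2, cb):
    # One-pass scan for Problem (8). X must be sorted ascending; y in {-1, +1}.
    m = len(X)
    # Start with theta1 below all points: every i falls in the 'theta1 < X_i' term.
    loss = float(np.sum(2 * D1 - cb * D2))
    best_loss, best_idx = loss, -1  # -1 encodes theta1 < X_1
    for i in range(m):
        # Moving theta1 past X_i shifts point i from the rejection-side term
        # to the 'X_i <= theta1' term, which only charges positive points.
        loss -= 2 * D1[i] - cb * D2[i]
        loss += 2 * D1[i] * (y[i] == +1)
        if loss < best_loss:
            best_loss, best_idx = loss, i
    theta1 = -np.inf if best_idx < 0 else X[best_idx]
    return theta1, best_loss

The scan for theta2 in Problem (9) is symmetric, scanning from the right.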
5 Experiments
Figure 5: Average rejection loss on the test set as a function of the abstention cost c for the TSB algorithm (in orange), the DHL algorithm (in red), and the BA algorithm (in blue), based on $L_{SB}$. (Panels: australian, cod, skin, banknote, haberman, and pima.)

In this section, we present the results of experiments with our abstention stump BA algorithm based on $L_{SB}$ for several datasets. We compare the BA algorithm with the DHL algorithm [1], as well as a
confidence-based boosting algorithm TSB. Both of these algorithms are described in further detail in Appendix B. We tested the algorithms on six data sets from UCI's data repository, specifically australian, cod, skin, banknote, haberman, and pima. For more information about the data sets, see Appendix I. For each data set, we performed standard 5-fold cross-validation, randomly dividing the data into training, validation, and test sets with ratio 3:1:1. Using a different random partition each time, we repeated the experiments five times. For all three algorithms, the cost value ranged over $c \in \{0.05, 0.1, \ldots, 0.5\}$, while the threshold ranged over $\{0.08, 0.16, \ldots, 0.96\}$. For the BA algorithm, the regularization parameter ranged over $\{0, 0.05, \ldots, 0.95\}$. All experiments for BA were based on $T = 200$ boosting rounds. The DHL algorithm used polynomial kernels with degree $d \in \{1, 2, 3\}$ and was implemented in CVX [8]. For each cost $c$, the hyperparameter configuration was chosen as the set of parameters attaining the smallest average rejection loss on the validation set. For that set of parameters, we report the results on the test set.
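For reference, the rejection loss plotted in Figure 5 is the standard abstention loss: classification error on accepted points plus a cost of c per rejected point. A minimal sketch (array names are ours):

import numpy as np

def rejection_loss(y_true, y_pred, rejected, c):
    # Average abstention loss over a sample.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rejected = np.asarray(rejected, dtype=bool)
    errors = (y_true != y_pred) & ~rejected
    return float(np.mean(errors + c * rejected))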
We first compared the confidence-based TSB algorithm with the BA and DHL algorithms (first row of Figure 5). The experiments show that, while TSB can sometimes perform better than DHL, in a number of cases its performance is dramatically worse as a function of c, and in all cases it is outperformed by BA. In Appendix J, we give the full set of results for the TSB algorithm.

In view of that, our next series of results focuses on the BA and DHL algorithms, which are directly designed to optimize the rejection loss, for 3 other datasets (second row of Figure 5). Overall, the figures show that BA outperforms the state-of-the-art DHL algorithm for most values of c, thereby indicating that BA yields a significant improvement in practice. We have also successfully run BA on the CIFAR-10 data set (boat and horse images), which contains 10,000 instances, and we believe that our algorithm can scale to much larger datasets. In contrast, training DHL on such larger samples did not terminate, as it is based on a costly QCQP. In Appendix J, we present tables that report the average and standard deviation of the abstention loss, as well as the fraction of rejected points and the classification error on non-rejected points.
6 Conclusion
We introduced a general framework for classification with abstention where the predictor and abstention functions are learned simultaneously. We gave a detailed study of ensemble learning within this framework, including: new surrogate loss functions proven to be calibrated, Rademacher complexity margin bounds for ensemble learning of the pair of predictor and abstention functions, a new boosting-style algorithm, the analysis of a natural family of base predictor and abstention functions, and the results of several experiments showing that the BA algorithm yields a significant improvement over the confidence-based algorithms DHL and TSB. Our algorithm can be further extended by considering more complex base pairs, such as more general ternary decision trees with rejection leaves. Moreover, our theory and algorithm can be generalized to the scenario of multi-class classification with abstention, which we have already initiated.
Acknowledgments
This work was partly funded by NSF CCF-1535987 and IIS-1618662.
References
[1] P. Bartlett and M. Wegkamp. Classification with a reject option using a hinge loss. JMLR, 2008.
[2] A. Bounsiar, E. Grall, and P. Beauseroy. Kernel based rejection method for supervised classification. In
WASET, 2007.
[3] H. L. Capitaine and C. Frelicot. An optimum class-rejective decision rule and its evaluation. In ICPR,
2010.
[4] K. Chaudhuri and C. Zhang. Beyond disagreement-based agnostic active learning. In NIPS, 2014.
[5] C. Chow. An optimum character recognition system using decision function. IEEE Trans. Comput., 1957.
[6] C. Chow. On optimum recognition error and reject trade-off. IEEE Trans. Comput., 1970.
[7] C. Cortes, G. DeSalvo, and M. Mohri. Learning with rejection. In ALT, 2016.
[8] I. CVX Research. CVX: Matlab software for disciplined convex programming, version 2.0, Aug. 2012.
[9] B. Dubuisson and M. Masson. Statistical decision rule with incomplete knowledge about classes. In PR,
1993.
[10] R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. JMLR, 2010.
[11] R. El-Yaniv and Y. Wiener. Agnostic selective classification. In NIPS, 2011.
[12] C. Elkan. The foundations of cost-sensitive learning. In IJCAI, 2001.
[13] G. Fumera and F. Roli. Support vector machines with embedded reject option. In ICPR, 2002.
[14] G. Fumera, F. Roli, and G. Giacinto. Multiple reject thresholds for improving classification reliability. In
ICAPR, 2000.
[15] Y. Grandvalet, J. Keshet, A. Rakotomamonjy, and S. Canu. Suppport vector machines with a reject option.
In NIPS, 2008.
[16] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of
combined classifiers. Annals of Statistics, 30, 2002.
[17] T. Landgrebe, D. Tax, P. Paclik, and R. Duin. The interaction between classification and reject performance
for distance-based reject-option classifiers. PRL, 2005.
[18] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, New
York, 1991.
[19] M. Littman, L. Li, and T. Walsh. Knows what it knows: A framework for self-aware learning. In ICML,
2008.
[20] Z.-Q. Luo and P. Tseng. On the convergence of coordinate descent method for convex differentiable
minimization. Journal of Optimization Theory and Applications, 1992.
[21] I. Melvin, J. Weston, C. S. Leslie, and W. S. Noble. Combining classifiers for improved classification of
proteins from sequence or structure. BMC Bioinformatics, 2008.
[22] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
[23] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer,
R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay.
Scikit-learn: Machine learning in python. In JMLR, 2011.
[24] T. Pietraszek. Optimizing abstaining classifiers using ROC analysis. In ICML, 2005.
[25] C. Santos-Pereira and A. Pires. On optimal reject rules and ROC curves. PRL, 2005.
[26] R. E. Schapire and Y. Singer. Boostexter: A boosting-based system for text categorization. Machine
Learning, 39(2-3):135–168, 2000.
[27] D. Tax and R. Duin. Growing a multi-class classifier with a reject option. In Pattern Recognition Letters,
2008.
[28] F. Tortorella. An optimal reject rule for binary classifiers. In ICAPR, 2001.
[29] K. Trapeznikov and V. Saligrama. Supervised sequential classification under budget constraints. In
AISTATS, 2013.
[30] J. Wang, K. Trapeznikov, and V. Saligrama. An lp for sequential learning under budgets. In JMLR, 2014.
[31] M. Yuan and M. Wegkamp. Classification methods with reject option based on convex risk minimizations.
In JMLR, 2010.
[32] M. Yuang and M. Wegkamp. Support vector machines with a reject option. In Bernoulli, 2011.
[33] C. Zhang and K. Chaudhuri. The extended Littlestone's dimension for learning with mistakes and abstentions.
In COLT, 2016.
| 6336 |@word repository:1 version:2 middle:1 polynomial:1 dubuisson:1 d2:3 incurs:1 thereby:1 configuration:1 series:3 contains:1 document:1 dubourg:1 outperforms:1 com:1 comparing:1 z2:7 luo:1 assigning:1 tackling:1 numerical:2 partition:1 plot:4 designed:1 update:1 selected:2 guess:1 leaf:1 provides:2 boosting:16 iterates:1 zhang:2 five:1 melvin:1 along:4 yuan:1 prove:4 consists:4 introduce:5 blondel:1 expected:9 dialog:2 examine:1 multi:2 manager:1 growing:1 inspired:1 decreasing:1 decomposed:1 automatically:1 haberman:2 increasing:1 considering:1 spain:1 bounded:1 moreover:1 agnostic:2 what:1 santos:1 argmin:7 minimizes:1 developed:1 spoken:2 finding:6 guarantee:7 classifier:13 rm:4 scaled:1 unit:1 positive:1 dropped:1 mistake:1 initiated:1 path:1 twice:1 koltchinskii:1 studied:1 suggests:1 limited:1 walsh:1 range:1 unique:1 acknowledgment:1 ternary:1 practice:4 empirical:3 reject:12 confidence:13 word:2 protein:1 cannot:2 operator:2 risk:2 optimize:1 measurable:2 equivalent:1 straightforward:2 masson:1 convex:24 m2:2 rule:5 d1:6 retrieve:1 searching:1 coordinate:6 annals:1 pt:3 programming:1 hypothesis:3 elkan:1 ze:1 recognition:3 predicts:2 labeled:3 solved:2 wang:1 region:2 eu:1 trade:2 ham:1 panchenko:1 complexity:4 littman:1 solving:1 rewrite:1 passos:1 negatively:1 learner:3 derivation:2 revert:1 distinct:1 effective:1 describe:1 cod:2 horse:1 h0:1 larger:2 valued:1 solve:1 otherwise:3 statistic:1 commit:1 itself:1 sequence:1 differentiable:4 ledoux:1 interaction:1 mb:3 maximal:1 saligrama:2 j2:1 relevant:1 uci:1 combining:1 chaudhuri:2 tax:2 boostexter:1 rejector:2 convergence:4 double:1 lmb:12 extending:1 rademacher:4 r1:1 optimum:3 yaniv:2 converges:1 categorization:1 help:1 derive:4 aug:1 solves:1 implemented:2 come:1 australian:2 giacinto:1 direction:8 discontinuous:1 correct:1 hull:1 abstains:1 fix:1 generalization:1 preliminary:1 varoquaux:1 strictly:2 extension:1 hold:11 considered:2 trapeznikov:2 capitaine:1 exp:5 cb:5 mapping:3 achieves:1 smallest:1 favorable:1 outperformed:1 applicable:1 label:4 prettenhofer:1 sensitive:1 create:1 successfully:1 weighted:4 minimization:3 mit:1 always:1 rather:1 avoid:1 hj:5 pn:1 ej:4 corollary:5 derived:2 focus:1 improvement:3 bernoulli:1 indicates:1 grisel:1 contrast:1 c1b:1 attains:1 rostamizadeh:1 sense:1 talwalkar:1 dependent:1 el:2 sb:3 chow:2 selective:2 selects:1 overall:1 classification:28 flexible:1 colt:1 denoted:1 art:1 special:1 orange:1 gramfort:1 equal:1 aware:1 extraction:1 bmc:1 icml:2 noble:1 report:4 others:1 randomly:1 simultaneously:3 preserve:1 detection:1 message:1 highly:1 cournapeau:1 evaluation:1 analyzed:1 light:1 behind:1 predefined:1 accurate:1 edge:1 necessary:1 tree:1 incomplete:1 littlestone:1 re:1 theoretical:4 minimal:1 instance:2 classify:1 earlier:1 leslie:1 cost:21 introducing:1 deviation:1 subset:1 rakotomamonjy:1 predictor:12 too:1 calibrated:6 combined:1 off:2 wegkamp:5 ym:2 containing:1 hn:1 worse:1 admit:1 resort:1 ek:6 style:3 return:1 derivative:3 li:1 michel:1 suggesting:1 account:1 stump:19 availability:1 dem:1 pend:1 view:2 h1:1 closed:1 red:1 bayes:4 option:10 recover:1 wiener:2 ensemble:14 yield:2 upperbounding:1 directional:2 weak:4 accurately:1 definition:4 dm:1 associated:3 proof:2 astronomical:1 knowledge:1 organized:1 sophisticated:1 attained:3 courant:2 supervised:3 follow:1 adaboost:4 dt:20 disciplined:1 improved:1 wei:1 formulation:2 strongly:2 rejected:8 just:1 talagrand:1 hand:1 scikit:1 google:3 logistic:3 believe:1 requiring:1 true:1 ranged:3 ccf:1 regularization:2 assigned:1 
hence:1 i2:1 round:2 self:1 coincides:2 generalized:1 l1:2 reasoning:1 meaning:2 image:1 abstain:5 novel:1 common:1 pseudocode:2 rl:6 banach:1 significant:3 refer:2 consistency:2 pm:4 similarly:3 canu:1 funded:1 reliability:1 calibration:2 base:18 optimizing:1 irrelevant:1 inf:1 scenario:9 tortorella:1 inequality:2 binary:4 abstention:64 yi:14 minimum:2 r0:1 ii:1 full:1 multiple:1 rj:17 match:1 cross:1 cifar:1 divided:1 calculates:1 prediction:4 normalization:2 kernel:2 sometimes:1 achieved:1 lsb:17 else:1 call:1 presence:2 prl:2 easy:1 variety:1 affect:1 gave:2 six:1 bartlett:3 returned:1 york:4 matlab:1 dramatically:1 generally:1 useful:1 detailed:1 category:1 reduced:1 schapire:3 exist:2 nsf:1 notice:1 sign:8 blue:1 hyperparameter:1 brucher:1 key:2 terminology:1 threshold:9 nevertheless:1 drawn:3 pj:1 abstaining:2 ht:7 monotone:1 fraction:1 run:1 inverse:1 everywhere:1 letter:1 family:10 reader:1 cvx:3 draw:2 decision:4 acceptable:1 appendix:15 bound:7 fold:1 strength:1 duin:2 precisely:1 constraint:5 software:1 qcqp:1 min:5 ptn:1 according:3 icpr:2 combination:1 smaller:1 describes:1 character:1 lp:1 making:3 b:1 hl:2 pr:1 equation:4 tsb:8 previously:1 thirion:1 singer:3 needed:1 letting:1 know:2 incurring:3 rewritten:1 observe:3 disagreement:1 icapr:2 corinna:2 original:1 include:1 log2:2 hinge:2 restrictive:1 objective:6 desalvo:3 skin:2 already:1 perrot:1 costly:1 rt:15 surrogate:19 kth:1 distance:1 tseng:1 illustration:1 ratio:1 minimizing:2 balance:1 equivalently:2 pima:2 negative:2 ba:20 design:2 zt:10 unknown:1 perform:1 allowing:1 upper:5 datasets:3 descent:10 incorrectly:1 extended:3 y1:2 rn:2 banknote:2 introduced:2 pair:24 vanderplas:1 z1:12 engine:1 rejective:1 learned:4 pires:1 barcelona:1 nip:4 trans:2 beyond:1 below:2 pattern:1 xm:3 dhl:11 including:3 max:12 ijcai:1 event:1 misclassification:1 natural:5 regularized:2 predicting:1 boat:1 isoperimetry:1 cim:2 redirect:1 negativity:1 naive:1 utterance:1 health:1 text:1 python:1 determining:2 embedded:1 loss:50 inspects:1 proven:1 validation:3 foundation:3 degree:1 sufficient:1 displaying:1 grandvalet:1 row:2 roli:2 mohri:4 last:2 free:1 side:1 institute:2 wide:1 taking:3 benefit:1 curve:1 dimension:1 landgrebe:1 projected:5 simplified:1 spam:1 compact:1 active:2 assumed:1 consuming:1 xi:31 fumera:2 search:4 continuous:1 table:1 additionally:1 learn:3 terminate:1 improving:1 mehryar:1 complex:1 did:1 aistats:1 bounding:2 noise:1 arise:1 allowed:1 repeated:1 x1:3 positively:1 roc:2 ny:3 pmuse:1 duchesnay:1 pereira:1 explicit:2 exponential:3 comput:2 jmlr:5 yh:14 third:1 learns:1 theorem:8 departs:1 rk:2 specific:2 showing:1 nyu:2 cortes:2 admits:1 alt:1 exists:1 giulia:1 adding:1 sequential:2 keshet:1 magnitude:1 illustrates:1 budget:2 margin:6 rejection:30 simply:2 lagrange:1 expressed:1 datadependent:1 paclik:1 applies:1 springer:1 radically:1 corresponds:1 determines:2 satisfies:1 weston:1 viewed:1 sorted:1 price:5 absence:1 change:2 specifically:2 uniformly:1 lemma:3 partly:1 experimental:1 indicating:1 select:1 pedregosa:1 support:2 arises:1 bioinformatics:2 tested:1 ex:1 |
5,899 | 6,337 | Dueling Bandits: Beyond Condorcet Winners to
General Tournament Solutions
Siddartha Ramamohan
Indian Institute of Science
Bangalore 560012, India
Arun Rajkumar
Xerox Research
Bangalore 560103, India
Shivani Agarwal
University of Pennsylvania
Philadelphia, PA 19104, USA
[email protected]
[email protected]
[email protected]
Abstract
Recent work on deriving O(log T ) anytime regret bounds for stochastic dueling
bandit problems has considered mostly Condorcet winners, which do not always
exist, and more recently, winners defined by the Copeland set, which do always
exist. In this work, we consider a broad notion of winners defined by tournament
solutions in social choice theory, which include the Copeland set as a special case
but also include several other notions of winners such as the top cycle, uncovered set,
and Banks set, and which, like the Copeland set, always exist. We develop a family
of UCB-style dueling bandit algorithms for such general tournament solutions, and
show O(log T ) anytime regret bounds for them. Experiments confirm the ability of
our algorithms to achieve low regret relative to the target winning set of interest.
1
Introduction
There has been significant interest and progress in recent years in developing algorithms for dueling
bandit problems [1?11]. Here there are K arms; on each trial t, one selects a pair of arms (it , jt ) for
comparison, and receives a binary feedback signal yt ? {0, 1} indicating which arm was preferred.
Most work on dueling bandits is in the stochastic setting and assumes a stochastic model ? a preference
matrix P of pairwise comparison probabilities Pij ? from which the feedback signals yt are drawn;
as with standard stochastic multi-armed bandits, the target here is usually to design algorithms with
O(ln T ) regret bounds, and where possible, O(ln T ) anytime (or ?horizon-free?) regret bounds, for
which the algorithm does not need to know the horizon or number of trials T in advance.
Early work on dueling bandits often assumed strong conditions on the preference matrix P, such as
existence of a total order, under which there is a natural notion of a ?maximal? element with respect to
which regret is measured. Recent work has sought to design algorithms under weaker conditions on
P; most work, however, has assumed the existence of a Condorcet winner, which is an arm i that beats
every other arm j (Pij > 12 ?j 6= i), and which reduces to the maximal element when a total order
exists. Unfortunately, the Condorcet winner does not always exist, and this has motivated a search for
other natural notions of winners, such as Borda winners and the Copeland set (see Figure 1).1 Among
these, the only work that offers anytime O(ln T ) regret bounds is the recent work of Zoghi et al. [11]
on Copeland sets. In this work, we consider defining winners in dueling bandits via the natural notion
of tournament solutions used in social choice theory, of which the Copeland set is a special case.
We develop general upper confidence bound (UCB) style dueling bandit algorithms for a number
of tournament solutions including the top cycle, uncovered set, and Banks set, and prove O(ln T )
anytime regret bounds for them, where the regret is measured relative to the tournament solution of
interest. Our proof technique is modular and can be used to develop algorithms with similar bounds
for any tournament solution for which a ?selection procedure? satisfying certain ?safety conditions?
can be designed. Experiments confirm the ability of our algorithms to achieve low regret relative to
the target winning set of interest.
1
Recently, Dudik et al. [10] also studied von Neumann winners, although they did so in a different (contextual)
setting, leading to O(T 1/2 ) and O(T 2/3 ) regret bounds.
Algorithm        Condition on P      Target Winner     Anytime?
MultiSBM [5]     U-Lin               Condorcet winner  Yes
IF [1]           TO+SST+STI          Condorcet winner  No
BTMB [2]         TO+RST+STI          Condorcet winner  No
RUCB [6]         CW                  Condorcet winner  Yes
MergeRUCB [7]    CW                  Condorcet winner  Yes
RMED [9]         CW                  Condorcet winner  Yes
SECS [8]         UBW                 Borda winner      No
PBR-SE [4]       DBS                 Borda winner      No
PBR-CO [4]       Any P without ties  Copeland set      No
SAVAGE-BO [3]    Any P without ties  Borda winner      No
SAVAGE-CO [3]    Any P without ties  Copeland set      No
CCB, SCB [11]    Any P without ties  Copeland set      Yes
UCB-TC           Any P without ties  Top cycle         Yes
UCB-UC           Any P without ties  Uncovered set     Yes
UCB-BA           Any P without ties  Banks set         Yes

Figure 1: Summary of algorithms for stochastic dueling bandit problems that have $O(\ln T)$ regret bounds, together with corresponding conditions on the underlying preference matrix P, target winners used in defining regret, and whether the regret bounds are "anytime". The figure on the left shows relations between some of the commonly studied conditions on P (see Table 1 for definitions). The algorithms in the lower part of the table (shown in red) are proposed in this paper.
2 Dueling Bandits, Tournament Solutions, and Regret Measures
Dueling Bandits. We denote by $[K] = \{1, \ldots, K\}$ the set of K arms. On each trial t, the learner selects a pair of arms $(i_t, j_t) \in [K] \times [K]$ (with $i_t$ possibly equal to $j_t$), and receives feedback in the form of a comparison outcome $y_t \in \{0, 1\}$, with $y_t = 1$ indicating $i_t$ was preferred over $j_t$ and $y_t = 0$ indicating the reverse. The goal of the learner is to select as often as possible from a set of "good" or "winning" arms, which we formalize below as a tournament solution.

The pairwise feedback on each trial is assumed to be generated stochastically according to a fixed but unknown pairwise preference model represented by a preference matrix $P \in [0,1]^{K \times K}$ with $P_{ij} + P_{ji} = 1\ \forall i, j$: whenever arms i and j are compared, i is preferred to j with probability $P_{ij}$, and j to i with probability $P_{ji} = 1 - P_{ij}$. Thus for each trial t, we have $y_t \sim \text{Bernoulli}(P_{i_t j_t})$. We assume throughout this paper that there are no "ties" between distinct arms, i.e. that $P_{ij} \neq \frac{1}{2}\ \forall i \neq j$.² We denote by $\mathcal{P}_K$ the set of all such preference matrices over K arms:
$$\mathcal{P}_K = \big\{P \in [0,1]^{K \times K} : P_{ij} + P_{ji} = 1\ \forall i,j;\ P_{ij} \neq \tfrac{1}{2}\ \forall i \neq j\big\}.$$
For any pair of arms (i, j), we will define the margin of (i, j) w.r.t. P as
$$\Delta^P_{ij} = \big|P_{ij} - \tfrac{1}{2}\big|.$$
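In simulation, this feedback model is simply a Bernoulli draw per comparison; a minimal helper (ours) for concreteness:

import numpy as np

def duel(P, i, j, rng):
    # Compare arms i and j: returns y_t = 1 if i is preferred, w.p. P[i, j].
    return int(rng.random() < P[i][j])

rng = np.random.default_rng(0)
P = [[0.5, 0.7], [0.3, 0.5]]
y = duel(P, 0, 1, rng)  # arm 0 wins with probability 0.7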
Previous work on dueling bandits has considered a variety of conditions on P; see Table 1 and Figure 1. Our interest here is in designing algorithms that have regret guarantees under minimal restrictions on P. To this end, we will consider general notions of winners that are derived from a natural tournament associated with P, and that are always guaranteed to exist. We will say an arm i beats an arm j w.r.t. P if $P_{ij} > \frac{1}{2}$; we will express this as a binary relation $\succ_P$ on $[K]$:
$$i \succ_P j \iff P_{ij} > \tfrac{1}{2}.$$
The tournament associated with P is then simply $T_P = ([K], E_P)$, where $E_P = \{(i, j) : i \succ_P j\}$. Two frequently studied notions of winners in previous work on dueling bandits, both of which are derived from the tournament $T_P$ (and which are the targets of previous anytime regret bounds), are the Condorcet winner when it exists, and the Copeland set in general:

Definition 1 (Condorcet winner). Let $P \in \mathcal{P}_K$. If there exists an arm $i^* \in [K]$ such that $i^* \succ_P j\ \forall j \neq i^*$, then $i^*$ is said to be a Condorcet winner w.r.t. P.

Definition 2 (Copeland set). Let $P \in \mathcal{P}_K$. The Copeland set w.r.t. P, denoted CO(P), is defined as the set of all arms in $[K]$ that beat the maximal number of arms w.r.t. P:
$$CO(P) = \operatorname*{argmax}_{i \in [K]} \sum_{j \neq i} \mathbb{1}\big(i \succ_P j\big).$$
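Both notions are straightforward to compute when P is known; for example, a short sketch (ours) of the Copeland set:

import numpy as np

def copeland_set(P):
    # Arms beating the maximal number of other arms under P (no ties assumed).
    B = np.asarray(P) > 0.5
    np.fill_diagonal(B, False)
    scores = B.sum(axis=1)
    return set(np.flatnonzero(scores == scores.max()))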
Here we are interested in more general notions of winning sets derived from the tournament $T_P$.

² The assumption of no ties was also made in deriving regret bounds w.r.t. the Copeland set in [3, 4, 11], and exists implicitly in [1, 2] as well.
Table 1: Commonly studied conditions on the preference matrix P.

Utility-based with linear link (U-Lin):  $\exists u \in [0,1]^K : P_{ij} = \frac{1 + (u_i - u_j)}{2}\ \forall i, j$
Total order (TO):                        $\exists \sigma \in S_n : P_{ij} > \frac{1}{2} \iff \sigma(i) < \sigma(j)$
Strong stochastic transitivity (SST):    $P_{ij} > \frac{1}{2},\ P_{jk} > \frac{1}{2} \implies P_{ik} \geq \max(P_{ij}, P_{jk})$
Relaxed stochastic transitivity (RST):   $\exists \gamma \geq 1 : P_{ij} > \frac{1}{2},\ P_{jk} > \frac{1}{2} \implies P_{ik} - \frac{1}{2} \geq \frac{1}{\gamma}\max(P_{ij} - \frac{1}{2},\, P_{jk} - \frac{1}{2})$
Stochastic triangle inequality (STI):    $P_{ij} > \frac{1}{2},\ P_{jk} > \frac{1}{2} \implies P_{ik} \leq P_{ij} + P_{jk} - \frac{1}{2}$
Condorcet winner (CW):                   $\exists i : P_{ij} > \frac{1}{2}\ \forall j \neq i$
Unique Borda winner (UBW):               $\exists i : \sum_{k \neq i} P_{ik} > \sum_{k \neq j} P_{jk}\ \forall j \neq i$
Distinct Borda scores (DBS):             $\sum_{k \neq i} P_{ik} \neq \sum_{k \neq j} P_{jk}\ \forall i \neq j$
Figure 2: Examples of various tournaments together with their corresponding tournament solutions. Edges that are not explicitly shown are directed from left to right; edges that are incident on subsets of nodes (rounded rectangles) apply to all nodes within. Left: a tournament on 5 nodes with gradually discriminating tournament solutions. Middle: the Hudry tournament on 13 nodes with disjoint Copeland and Banks sets. Right: a tournament on 8 nodes based on ATP tennis match records.
Tournament Solutions. Tournament solutions have long been used in social choice and voting theory to define winners in general tournaments when no Condorcet winner exists [12, 13]. Specifically, a tournament solution is any mapping that maps each tournament on K nodes to a subset of "winning" nodes in $[K]$; for our purposes, we will define a tournament solution to be any mapping $S : \mathcal{P}_K \to 2^{[K]}$ that maps each preference matrix P (via the induced tournament $T_P$) to a subset of winning arms $S(P) \subseteq [K]$.³ The Copeland set is one such tournament solution. We consider three additional tournament solutions in this paper: the top cycle, the uncovered set, and the Banks set, all of which offer other natural generalizations of the Condorcet winner. These tournament solutions are motivated by different considerations (ranging from dominance to covering to decomposition into acyclic subtournaments) and have graded discriminative power, and can therefore be used to match the needs of different applications; see [12] for a comprehensive survey.

Definition 3 (Top cycle). Let $P \in \mathcal{P}_K$. The top cycle w.r.t. P, denoted TC(P), is defined as the smallest set $W \subseteq [K]$ for which $i \succ_P j\ \forall i \in W, j \notin W$.

Definition 4 (Uncovered set). Let $P \in \mathcal{P}_K$. An arm i is said to cover an arm j w.r.t. P if $i \succ_P j$ and $\forall k : j \succ_P k \implies i \succ_P k$. The uncovered set w.r.t. P, denoted UC(P), is defined as the set of all arms that are not covered by any other arm w.r.t. P:
$$UC(P) = \big\{i \in [K] : \nexists\, j \in [K] \text{ s.t. } j \text{ covers } i \text{ w.r.t. } P\big\}.$$

Definition 5 (Banks set). Let $P \in \mathcal{P}_K$. A subtournament $T = (V, E)$ of $T_P$, where $V \subseteq [K]$ and $E = E_P|_{V \times V}$, is said to be maximal acyclic if (i) T is acyclic, and (ii) no other subtournament containing T is acyclic. Denote by MAST(P) the set of all maximal acyclic subtournaments of $T_P$, and for each $T \in$ MAST(P), denote by $m^*(T)$ the maximal element of T. Then the Banks set w.r.t. P, denoted BA(P), is defined as the set of maximal elements of all maximal acyclic subtournaments of $T_P$:
$$BA(P) = \big\{m^*(T) : T \in \text{MAST}(P)\big\}.$$

It is known that $BA(P) \subseteq UC(P) \subseteq TC(P)$ and $CO(P) \subseteq UC(P) \subseteq TC(P)$. In general, BA(P) and CO(P) may intersect, although they can also be disjoint. When P contains a Condorcet winner $i^*$, all four tournament solutions reduce to just the singleton set $\{i^*\}$. See Figure 2 for examples.
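For small K, all three tournament solutions can be computed directly from the beats relation; the following brute-force sketch (ours, for illustration only) uses the closure characterization of the top cycle, the covering definition for the uncovered set, and explicit subset enumeration for the Banks set (exponential in K, so only practical for small tournaments).

import itertools
import numpy as np

def beats(P):
    B = np.asarray(P) > 0.5
    np.fill_diagonal(B, False)
    return B

def top_cycle(P):
    # Smallest dominant set: close a Copeland winner under 'is beaten by'.
    B = beats(P); K = len(B)
    W = {int(np.argmax(B.sum(axis=1)))}
    grew = True
    while grew:
        grew = False
        for j in range(K):
            if j not in W and any(B[j, i] for i in W):
                W.add(j); grew = True
    return W

def uncovered_set(P):
    B = beats(P); K = len(B)
    def covers(i, j):  # i covers j: i beats j and everything j beats
        return B[i, j] and all(B[i, k] for k in range(K) if B[j, k])
    return {j for j in range(K)
            if not any(covers(i, j) for i in range(K) if i != j)}

def banks_set(P):
    # Maximal elements of maximal acyclic (i.e., transitive) subtournaments.
    B = beats(P); K = len(B)
    def acyclic(V):  # a subtournament is transitive iff out-degrees are 0..|V|-1
        degs = sorted(sum(bool(B[i, j]) for j in V) for i in V)
        return degs == list(range(len(V)))
    winners = set()
    for r in range(1, K + 1):
        for V in itertools.combinations(range(K), r):
            if acyclic(V) and all(not acyclic(V + (v,))
                                  for v in range(K) if v not in V):
                winners.add(max(V, key=lambda i: sum(bool(B[i, j]) for j in V)))
    return winners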
³ Strictly speaking, the mapping S must be invariant under permutations of the node labels.
Regret Measures. When P admits a Condorcet winner $i^*$, the individual regret of an arm i is usually defined as $r^{CW}_P(i) = \Delta^P_{i^*,i}$, and the cumulative regret over T trials of an algorithm A that selects arms $(i_t, j_t)$ on trial t is then generally defined as $R^{CW}_T(A) = \sum_{t=1}^{T} r^{CW}_P(i_t, j_t)$, where the pairwise regret $r^{CW}_P(i, j)$ is either the average regret $\frac{1}{2}\big(r^{CW}_P(i) + r^{CW}_P(j)\big)$, the strong regret $\max\big(r^{CW}_P(i), r^{CW}_P(j)\big)$, or the weak regret $\min\big(r^{CW}_P(i), r^{CW}_P(j)\big)$ [1, 2, 6, 7, 9].⁴ When the target winner is a tournament solution S, we can similarly define a suitable notion of individual regret of an arm i w.r.t. S, and then use this to define pairwise regrets as above.

In particular, for the three tournament solutions discussed above, we will define the following natural notions of individual regret:
$$r^{TC}_P(i) = \begin{cases} \max_{i' \in TC(P)} \Delta^P_{i',i} & \text{if } i \notin TC(P) \\ 0 & \text{if } i \in TC(P); \end{cases} \qquad r^{UC}_P(i) = \begin{cases} \max_{i' \in UC(P):\, i' \text{ covers } i} \Delta^P_{i',i} & \text{if } i \notin UC(P) \\ 0 & \text{if } i \in UC(P); \end{cases}$$
$$r^{BA}_P(i) = \begin{cases} \max_{T \in \text{MAST}(P):\, T \text{ contains } i} \Delta^P_{m^*(T),i} & \text{if } i \notin BA(P) \\ 0 & \text{if } i \in BA(P). \end{cases}$$

In the special case when P admits a Condorcet winner $i^*$, the three individual regrets above all reduce to the Condorcet individual regret, $r^{CW}_P(i) = \Delta^P_{i^*,i}$. In each case above, the cumulative regret of an algorithm A over T trials will then be given by
$$R^S_T(A) = \sum_{t=1}^{T} r^S_P(i_t, j_t),$$
where the pairwise regret $r^S_P(i, j)$ can be the average regret $\frac{1}{2}\big(r^S_P(i) + r^S_P(j)\big)$, the strong regret $\max\big(r^S_P(i), r^S_P(j)\big)$, or the weak regret $\min\big(r^S_P(i), r^S_P(j)\big)$. Our regret bounds will hold for each of these forms of pairwise regret. In fact, our regret bounds hold for any measure of pairwise regret $r^S_P(i, j)$ that satisfies the following three conditions:

(i) $r^S_P(\cdot, \cdot)$ is normalized: $r^S_P(i, j) \in [0, 1]\ \forall i, j$;
(ii) $r^S_P(\cdot, \cdot)$ is symmetric: $r^S_P(i, j) = r^S_P(j, i)\ \forall i, j$; and
(iii) $r^S_P(\cdot, \cdot)$ is proper w.r.t. S: $i, j \in S(P) \implies r^S_P(i, j) = 0$.

It is easy to verify that for the three tournament solutions above, the average, strong, and weak pairwise regrets all satisfy these conditions.⁵,⁶
3 UCB-TS: Generic Dueling Bandit Algorithm for Tournament Solutions
Algorithm. In Algorithm 1 we outline a generic dueling bandit algorithm, which we call UCB-TS, for identifying winners from a general tournament solution. The algorithm can be instantiated to specific tournament solutions by designing suitable selection procedures SelectProc-TS (more details below). The algorithm maintains a matrix $U^t \in \mathbb{R}^{K \times K}_+$ of upper confidence bounds (UCBs) $U^t_{ij}$ on the unknown pairwise preference probabilities $P_{ij}$. The UCBs are constructed by adding a confidence term to the current empirical estimate of $P_{ij}$; the exploration parameter $\alpha > \frac{1}{2}$ controls the exploration rate of the algorithm via the size of the confidence terms used. On each trial t, the algorithm selects a pair of arms $(i_t, j_t)$ based on the current UCB matrix $U^t$ using the selection procedure SelectProc-TS; on observing the preference feedback $y_t$, the algorithm then updates the UCBs for all pairs of arms (i, j) (the UCBs of all pairs (i, j) grow slowly with t, so that pairs that have not been selected for a while have an increasing chance of being explored).

In order to instantiate the UCB-TS algorithm to a particular tournament solution S, the critical step is to design the selection procedure SelectProc-TS in a manner that yields good regret bounds for a suitable regret measure w.r.t. S. Below we identify general conditions on SelectProc-TS that allow for $O(\ln T)$ anytime regret bounds to be obtained (we will design procedures satisfying these conditions for the three tournament solutions of interest in Section 4).
⁴ The notion of regret used in [5] was slightly different.
⁵ It is also easy to verify that defining the individual regrets as the minimum or average margin relative to all relevant arms in the tournament solution of interest (instead of the maximum margin as done above) also preserves these properties, and therefore our regret bounds hold for the resulting variants of regret as well.
⁶ One can also consider defining the individual regrets simply in terms of mistakes relative to the target tournament solution of interest, e.g. $r^{TC}_P(i) = \mathbb{1}\big(i \notin TC(P)\big)$, and define average/strong/weak pairwise regrets in terms of these; our bounds also apply in this case.
Algorithm 1 UCB-TS
1: Require: Selection procedure SelectProc-TS
2: Parameter: Exploration parameter $\alpha > \frac{1}{2}$
3: Initialize: $\forall (i, j) \in [K] \times [K]$:
   $N^1_{ij} = 0$   // # times (i, j) has been compared
   $W^1_{ij} = 0$   // # times i has won against j
   $U^1_{ij} = \frac{1}{2}$ if $i = j$, and $1$ otherwise   // UCB for $P_{ij}$
4: For $t = 1, 2, \ldots$ do:
5:   Select $(i_t, j_t) \leftarrow$ SelectProc-TS$(U^t)$
6:   Receive preference feedback $y_t \in \{0, 1\}$
7:   Update counts: $\forall (i, j) \in [K] \times [K]$:
     $N^{t+1}_{ij} = N^t_{ij} + 1$ if $\{i, j\} = \{i_t, j_t\}$, and $N^t_{ij}$ otherwise;
     $W^{t+1}_{ij} = W^t_{ij} + y_t$ if $(i, j) = (i_t, j_t)$; $W^t_{ij} + (1 - y_t)$ if $(i, j) = (j_t, i_t)$; and $W^t_{ij}$ otherwise
8:   Update UCBs: $\forall (i, j) \in [K] \times [K]$:
     $U^{t+1}_{ij} = \frac{1}{2}$ if $i = j$; $1$ if $i \neq j$ and $N^{t+1}_{ij} = 0$; and $\frac{W^{t+1}_{ij}}{N^{t+1}_{ij}} + \sqrt{\frac{\alpha \ln t}{N^{t+1}_{ij}}}$ otherwise.
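A compact Python rendering of Algorithm 1 (a sketch of ours): the selection procedure is passed in as a callback, and the true preference matrix P is used only to simulate the Bernoulli feedback. Self-comparisons (i_t = j_t) are recorded but draw no informative feedback, since $U_{ii}$ is pinned to 1/2.

import numpy as np

def ucb_ts(P, select_proc, T, alpha=0.51, seed=0):
    rng = np.random.default_rng(seed)
    P = np.asarray(P, dtype=float)
    K = len(P)
    N = np.zeros((K, K))                      # comparison counts
    W = np.zeros((K, K))                      # win counts
    U = np.ones((K, K)); np.fill_diagonal(U, 0.5)
    history = []
    for t in range(1, T + 1):
        i, j = select_proc(U)
        if i != j:
            y = int(rng.random() < P[i, j])   # y_t ~ Bernoulli(P_ij)
            W[i, j] += y; W[j, i] += 1 - y
            N[i, j] += 1; N[j, i] += 1
        history.append((i, j))
        # UCBs grow slowly with t, so neglected pairs get re-explored.
        with np.errstate(divide="ignore", invalid="ignore"):
            U = W / N + np.sqrt(alpha * np.log(t) / N)
        U[N == 0] = 1.0
        np.fill_diagonal(U, 0.5)
    return history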
Regret Analysis. We show here that if the selection procedure SelectProc-TS satisfies two natural conditions w.r.t. a tournament solution S, namely the safe identical-arms condition w.r.t. S and the safe distinct-arms condition w.r.t. S, then the resulting instantiation of the UCB-TS algorithm has an $O(\ln T)$ regret bound for any regret measure that is normalized, symmetric, and proper w.r.t. S. The first condition ensures that if the UCB matrix U given as input to SelectProc-TS in fact forms an element-wise upper bound on the true preference matrix P and SelectProc-TS returns two identical arms (i, i), then i must be in the winning set S(P). The second condition ensures that if U upper bounds P and SelectProc-TS returns two distinct arms (i, j), $i \neq j$, then either both i, j are in the winning set S(P), or the UCBs $U_{ij}$, $U_{ji}$ are still loose (and (i, j) should be explored further).

Definition 6 (Safe identical-arms condition). Let $S : \mathcal{P}_K \to 2^{[K]}$ be a tournament solution. We will say a selection procedure SelectProc-TS : $\mathbb{R}^{K \times K}_+ \to [K] \times [K]$ satisfies the safe identical-arms condition w.r.t. S if for all $P \in \mathcal{P}_K$, $U \in \mathbb{R}^{K \times K}_+$ such that $P_{ij} \leq U_{ij}\ \forall i, j$, we have
$$\text{SelectProc-TS}(U) = (i, i) \implies i \in S(P).$$

Definition 7 (Safe distinct-arms condition). Let $S : \mathcal{P}_K \to 2^{[K]}$ be a tournament solution. We will say a selection procedure SelectProc-TS : $\mathbb{R}^{K \times K}_+ \to [K] \times [K]$ satisfies the safe distinct-arms condition w.r.t. S if for all $P \in \mathcal{P}_K$, $U \in \mathbb{R}^{K \times K}_+$ such that $P_{ij} \leq U_{ij}\ \forall i, j$, we have
$$\text{SelectProc-TS}(U) = (i, j),\ i \neq j \implies (i, j) \in S(P) \times S(P) \ \text{ or } \ U_{ij} + U_{ji} \geq 1 + \Delta^P_{ij}.$$

In what follows, for $K \in \mathbb{Z}_+$, $\alpha > \frac{1}{2}$, and $\delta \in (0, 1]$, we define
$$C(K, \alpha, \delta) = \left(\frac{(4\alpha - 1) K^2}{(2\alpha - 1)\, \delta}\right)^{1/(2\alpha - 1)}.$$
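As a quick numeric sanity check (ours), $C(K, \alpha, \delta)$ is easy to evaluate; note how sharply it blows up as $\alpha$ approaches 1/2:

def C(K, alpha, delta):
    return (((4 * alpha - 1) * K ** 2)
            / ((2 * alpha - 1) * delta)) ** (1 / (2 * alpha - 1))

print(C(8, 1.00, 0.05))  # 3840.0
print(C(8, 0.51, 0.05))  # ~1.4e+241: the bound is vacuous for small horizons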
This quantity, which also appears in the analysis of RUCB [6], acts as an initial time period beyond which all the UCBs $U_{ij}$ upper bound $P_{ij}$ w.h.p. We have the following result (proof in Appendix A):

Theorem 8 (Regret bound for UCB-TS algorithm). Let $S : \mathcal{P}_K \to 2^{[K]}$ be a tournament solution, and suppose the selection procedure SelectProc-TS used in the UCB-TS algorithm satisfies both the safe identical-arms condition w.r.t. S and the safe distinct-arms condition w.r.t. S. Let $P \in \mathcal{P}_K$, and let $r^S_P(i, j)$ be any normalized, symmetric, proper regret measure w.r.t. S. Let $\alpha > \frac{1}{2}$ and $\delta \in (0, 1]$. Then with probability at least $1 - \delta$ (over the feedback $y_t$ drawn randomly from P and any internal randomness in SelectProc-TS), the cumulative regret of the UCB-TS algorithm with exploration parameter $\alpha$ is upper bounded as
$$R^S_T\big(\text{UCB-TS}(\alpha)\big) \leq C(K, \alpha, \delta) + 4\alpha (\ln T) \sum_{i<j:\ (i,j) \notin S(P) \times S(P)} \frac{r^S_P(i, j)}{(\Delta^P_{ij})^2}.$$
Figure 3: Inferences about the direction of preference between arms i and j under the true preference matrix P, based on the UCBs $U_{ij}$, $U_{ji}$, assuming that $P_{ij}$, $P_{ji}$ are upper bounded by $U_{ij}$, $U_{ji}$.
4 Dueling Bandit Algorithms for Top Cycle, Uncovered Set, and Banks Set
Below we give selection procedures satisfying both the safe identical-arms condition and the safe distinct-arms condition above w.r.t. the top cycle, uncovered set, and Banks set, which immediately yield dueling bandit algorithms with $O(\ln T)$ regret bounds w.r.t. these tournament solutions. An instantiation of our framework to the Copeland set is also discussed in Appendix E.

The selection procedure for each tournament solution is closely related to the corresponding winner determination algorithm for that tournament solution; however, while a standard winner determination algorithm would have access to the actual tournament $T_P$, the selection procedures we design can only guess (with high confidence) the preference directions between some pairs of arms based on the UCB matrix U. In particular, if the entries of U actually upper bound those of P, then for any pair of arms i and j, one of the following must be true (see also Figure 3; a small helper implementing this case analysis is sketched below):
- $U_{ij} < \frac{1}{2}$, in which case $P_{ij} \leq U_{ij} < \frac{1}{2}$ and therefore $j \succ_P i$;
- $U_{ji} < \frac{1}{2}$, in which case $P_{ji} \leq U_{ji} < \frac{1}{2}$ and therefore $i \succ_P j$;
- $U_{ij} \geq \frac{1}{2}$ and $U_{ji} \geq \frac{1}{2}$, in which case the direction of preference between i and j in $T_P$ is unresolved.

The selection procedures we design manage the exploration-exploitation tradeoff by adopting an optimism-followed-by-pessimism approach, similar to that used in the design of the RUCB and CCB algorithms [6, 11]. Specifically, our selection procedures first optimistically identify a potential winning arm a based on the UCBs U (by optimistically setting the directions of any unresolved edges in $T_P$ in favor of the arm being considered; see Figure 3). Once a putative winning arm a is identified, the selection procedures then pessimistically find an arm b that has the greatest chance of invalidating a as a winning arm, and select the pair (a, b) for comparison.
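The case analysis above translates directly into code; a small helper (a sketch of ours) makes the three cases explicit:

def pair_status(U, i, j):
    # Direction of preference inferred from the UCBs, assuming U >= P entrywise.
    if U[i][j] < 0.5:
        return "j beats i"   # P_ij <= U_ij < 1/2
    if U[j][i] < 0.5:
        return "i beats j"   # P_ji <= U_ji < 1/2
    return "unresolved"      # both UCBs >= 1/2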
4.1 UCB-TC: Dueling Bandit Algorithm for Top Cycle
The selection procedure SelectProc-TC (Algorithm 2), when instantiated in the UCB-TS template, yields the UCB-TC dueling bandit algorithm. Intuitively, SelectProc-TC constructs an optimistic estimate A of the top cycle based on the UCBs U (line 2) and selects a potential winning arm a from A (line 3); if there is no unresolved arm against a (line 5), then it returns (a, a) for comparison; else it selects the best-performing unresolved opponent b (line 8) and returns (a, b) for comparison. We have the following result (see Appendix B for a proof):

Theorem 9 (SelectProc-TC satisfies safety conditions w.r.t. TC). SelectProc-TC satisfies both the safe identical-arms condition and the safe distinct-arms condition w.r.t. TC.

By virtue of Theorem 8, this immediately yields the following regret bound for UCB-TC:

Corollary 10 (Regret bound for UCB-TC algorithm). Let $P \in \mathcal{P}_K$. Let $\alpha > \frac{1}{2}$ and $\delta \in (0, 1]$. Then with probability at least $1 - \delta$, the cumulative regret of UCB-TC w.r.t. the top cycle satisfies
$$R^{TC}_T\big(\text{UCB-TC}(\alpha)\big) \leq C(K, \alpha, \delta) + 4\alpha (\ln T) \sum_{i<j:\ (i,j) \notin TC(P) \times TC(P)} \frac{r^{TC}_P(i, j)}{(\Delta^P_{ij})^2}.$$
4.2 UCB-UC: Dueling Bandit Algorithm for Uncovered Set
The selection procedure SelectProc-UC (Algorithm 3), when instantiated in the UCB-TS template, yields the UCB-UC dueling bandit algorithm. SelectProc-UC relies on the property that an uncovered arm beats every other arm either directly or via an intermediary [12]. SelectProc-UC optimistically identifies such a potentially uncovered arm a based on the UCBs U (line 5); if it can be resolved that a is indeed uncovered (line 7), then it returns (a, a); else it selects the best-performing unresolved opponent b when available (line 11), or an arbitrary opponent b otherwise (line 13), and returns (a, b). We have the following result (see Appendix C for a proof):
Algorithm 2 SelectProc-TC
1: Input: UCB matrix $U \in \mathbb{R}^{K \times K}_+$
2: Let $A \subseteq [K]$ be any minimal set satisfying $U_{ij} \geq \frac{1}{2}\ \forall i \in A, j \notin A$
3: Select any $a \in \operatorname{argmax}_{i \in A} \min_{j \notin A} U_{ij}$
4: $B \leftarrow \{i \neq a : U_{ai} \geq \frac{1}{2} \wedge U_{ia} \geq \frac{1}{2}\}$
5: if $B = \emptyset$ then
6:   Return (a, a)
7: else
8:   Select any $b \in \operatorname{argmax}_{i \in B} U_{ia}$
9:   Return (a, b)
10: end if

Algorithm 3 SelectProc-UC
1: Input: UCB matrix $U \in \mathbb{R}^{K \times K}_+$
2: for $i = 1$ to K do
3:   $y(i) \leftarrow \sum_j \mathbb{1}(U_{ij} \geq \frac{1}{2}) + \sum_{j,k} \mathbb{1}(U_{ij} \geq \frac{1}{2} \wedge U_{jk} \geq \frac{1}{2})$
4: end for
5: Select any $a \in \operatorname{argmax}_i y(i)$
6: $B \leftarrow \{i \neq a : U_{ai} \geq \frac{1}{2} \wedge U_{ia} \geq \frac{1}{2}\}$
7: if $\forall i \neq a : (U_{ia} < \frac{1}{2}) \vee (\exists j : U_{ij} < \frac{1}{2} \wedge U_{ja} < \frac{1}{2})$ then
8:   Return (a, a)
9: else
10:  if $B \neq \emptyset$ then
11:    Select any $b \in \operatorname{argmax}_{i \in B} U_{ia}$
12:  else
13:    Select any $b \neq a$
14:  end if
15:  Return (a, b)
16: end if

Algorithm 4 SelectProc-BA
1: Input: UCB matrix $U \in \mathbb{R}^{K \times K}_+$
2: Select any $j_1 \in [K]$
3: $J \leftarrow \{j_1\}$   // Initialize candidate Banks trajectory
4: $s \leftarrow 1$   // Initialize size of candidate Banks trajectory
5: traj_found = FALSE
6: while NOT(traj_found) do
7:   $C \leftarrow \{i \notin J : U_{ij} > \frac{1}{2}\ \forall j \in J\}$
8:   if $C = \emptyset$ then
9:     traj_found = TRUE
10:    break
11:  else
12:    $j_{s+1} \leftarrow \operatorname{argmax}_{i \in C} \big(\min_{j \in J} U_{ij}\big)$
13:    $J \leftarrow J \cup \{j_{s+1}\}$
14:    $s \leftarrow s + 1$
15:  end if
16: end while
17: if $\forall\, 1 \leq q < r \leq s$: $U_{j_q, j_r} < \frac{1}{2}$ then
18:   $a \leftarrow j_s$
19:   Return (a, a)
20: else
21:   Select any $(\tilde{q}, \tilde{r}) \in \operatorname{argmax}_{(q,r):\, 1 \leq q < r \leq s} U_{j_q, j_r}$
22:   $(a, b) \leftarrow (j_{\tilde{q}}, j_{\tilde{r}})$
23:   Return (a, b)
24: end if
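To illustrate how these pseudocode procedures translate to code, here is a Python sketch of SelectProc-TC (Algorithm 2); it is our rendering, not the authors' implementation. The "any minimal set" of line 2 is realized by closing each seed arm under resolved predecessors and keeping a smallest closure, which is one valid choice.

import numpy as np

def select_proc_tc(U):
    # Sketch of SelectProc-TC: returns a pair (a, b) of arms to compare.
    U = np.asarray(U, dtype=float)
    K = len(U)

    def closure(seed):
        # Smallest A containing seed with U[i, j] >= 1/2 for all i in A, j not in A:
        # repeatedly add any outside arm j that surely beats some inside arm.
        A = {seed}
        grew = True
        while grew:
            grew = False
            for j in range(K):
                if j not in A and any(U[i, j] < 0.5 for i in A):
                    A.add(j); grew = True
        return A

    A = min((closure(s) for s in range(K)), key=len)   # line 2
    outside = [j for j in range(K) if j not in A]
    a = (max(A, key=lambda i: min(U[i, j] for j in outside))
         if outside else next(iter(A)))                # line 3
    B = [i for i in range(K)
         if i != a and U[a, i] >= 0.5 and U[i, a] >= 0.5]  # unresolved vs. a
    if not B:
        return a, a
    b = max(B, key=lambda i: U[i, a])  # best-performing unresolved opponent
    return a, b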
Theorem 11 (SelectProc-UC satisfies safety conditions w.r.t. UC). SelectProc-UC satisfies both the safe identical-arms condition and the safe distinct-arms condition w.r.t. UC.
Again, by virtue of Theorem 8, this immediately yields the following regret bound for UCB-UC:

Corollary 12 (Regret bound for UCB-UC algorithm). Let $P \in \mathcal{P}_K$. Let $\alpha > \frac{1}{2}$ and $\delta \in (0, 1]$. Then with probability at least $1 - \delta$, the cumulative regret of UCB-UC w.r.t. the uncovered set satisfies
$$R^{UC}_T\big(\text{UCB-UC}(\alpha)\big) \leq C(K, \alpha, \delta) + 4\alpha (\ln T) \sum_{i<j:\ (i,j) \notin UC(P) \times UC(P)} \frac{r^{UC}_P(i, j)}{(\Delta^P_{ij})^2}.$$

4.3 UCB-BA: Dueling Bandit Algorithm for Banks Set
The selection procedure SelectProc-BA (Algorithm 4), when instantiated in the UCB-TS template, yields the UCB-BA dueling bandit algorithm. Intuitively, SelectProc-BA first constructs an optimistic candidate maximal acyclic subtournament (the set J; also called a Banks trajectory) based on the UCBs U (lines 2–16). If this subtournament is completely resolved (line 17), then its maximal arm a is picked and (a, a) is returned; if not, an unresolved pair (a, b) is returned that is most likely to fail the acyclicity/transitivity property. We have the following result (see Appendix D for a proof):

Theorem 13 (SelectProc-BA satisfies safety conditions w.r.t. BA). SelectProc-BA satisfies both the safe identical-arms condition and the safe distinct-arms condition w.r.t. BA.

Again, by virtue of Theorem 8, this immediately yields the following regret bound for UCB-BA:

Corollary 14 (Regret bound for UCB-BA algorithm). Let $P \in \mathcal{P}_K$. Let $\alpha > \frac{1}{2}$ and $\delta \in (0, 1]$. Then with probability at least $1 - \delta$, the cumulative regret of UCB-BA w.r.t. the Banks set satisfies
$$R^{BA}_T\big(\text{UCB-BA}(\alpha)\big) \leq C(K, \alpha, \delta) + 4\alpha (\ln T) \sum_{i<j:\ (i,j) \notin BA(P) \times BA(P)} \frac{r^{BA}_P(i, j)}{(\Delta^P_{ij})^2}.$$
Figure 4: Regret performance of our algorithms compared to BTMB, RUCB, SAVAGE-CO, and CCB. Results are averaged over 10 independent runs; light colored bands represent one standard error. Left: top cycle regret of UCB-TC on $P_{MSLR}$. Middle: uncovered set regret of UCB-UC on $P_{Tennis}$. Right: Banks set regret of UCB-BA on $P_{Hudry}$. See Appendix F.2 for additional results.
5 Experiments
Here we provide an empirical evaluation of the performance of the proposed dueling bandit algorithms. We used the following three preference matrices in our experiments, one of which is synthetic and two real-world, and none of which possesses a Condorcet winner:
- $P_{Hudry} \in \mathcal{P}_{13}$: This is constructed from the Hudry tournament shown in Figure 2(b); as noted earlier, this is the smallest tournament whose Copeland set and Banks set are disjoint [14]. Details of this preference matrix can be found in Appendix F.1.1.
- $P_{Tennis} \in \mathcal{P}_8$: This is constructed from real data collected from the Association of Tennis Professionals' (ATP's) website on outcomes of tennis matches played among 8 well-known professional tennis players. The tournament associated with $P_{Tennis}$ is shown in Figure 2(c); further details of this preference matrix can be found in Appendix F.1.2.
- $P_{MSLR} \in \mathcal{P}_{16}$: This is constructed from real data from the Microsoft Learning to Rank (MSLR) Web10K data set. Further details can be found in Appendix F.1.3.

We compared the performance of our algorithms, UCB-TC, UCB-BA, and UCB-UC, with four previous dueling bandit algorithms: BTMB [2], RUCB [6], SAVAGE-CO [3], and CCB [11].⁷ In each case, we assessed the algorithms in terms of average pairwise regret relative to the target tournament solution of interest (see Section 2), averaged over 10 independent runs. A sample of the results is shown in Figure 4; as can be seen, the proposed algorithms UCB-TC, UCB-UC, and UCB-BA generally outperform the existing baselines in terms of minimizing regret relative to the top cycle, the uncovered set, and the Banks set, respectively. Additional results, including results with the Copeland set variant of our algorithm, UCB-CO, can be found in Appendix F.2.
6 Conclusion
In this paper, we have proposed the use of general tournament solutions as sets of "winning" arms in stochastic dueling bandit problems, with the advantage that these tournament solutions always exist and can be used to define winners according to criteria that are most relevant to a given dueling bandit setting. We have developed a UCB-style family of algorithms for such general tournament solutions, and have shown $O(\ln T)$ anytime regret bounds for the algorithm instantiated to the top cycle, uncovered set, and Banks set (as well as the Copeland set; see Appendix E).

While our approach has an appealing modular structure, both algorithmically and in our proofs, an open question concerns the optimality of our regret bounds in their dependence on the number of arms K. For the Condorcet winner, the MergeRUCB algorithm [7] has an anytime regret bound of the form $O(K \ln T)$; for the Copeland set, the SCB algorithm [11] has an anytime regret bound of the form $O(K \ln K \ln T)$. In the worst case, our regret bounds are of the form $O(K^2 \ln T)$. Is it possible that for the top cycle, uncovered set, and Banks set, one can also show an $\Omega(K^2 \ln T)$ lower bound on the regret? Or can our regret bounds or algorithms be improved? We leave a detailed investigation of this issue to future work.
Acknowledgments. Thanks to the anonymous reviewers for helpful comments and suggestions. SR
thanks Google for a travel grant to present this work at the conference.
⁷ For all the UCB-based algorithms (including our algorithms, RUCB, and CCB), we set the exploration parameter $\alpha$ to 0.51; for SAVAGE-CO, we set the confidence parameter $\delta$ to $1/T$; and for BTMB, we set $\delta$ to $1/T$ and chose $\gamma$ to satisfy the $\gamma$-relaxed stochastic transitivity property for each preference matrix.
References
[1] Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. The K-armed dueling
bandits problem. Journal of Computer and System Sciences, 78(5):1538–1556, 2012.
[2] Yisong Yue and Thorsten Joachims. Beat the mean bandit. In Proceedings of the 28th
International Conference on Machine Learning, 2011.
[3] Tanguy Urvoy, Fabrice Clerot, Raphaël Féraud, and Sami Naamane. Generic exploration and
k-armed voting bandits. In Proceedings of the 30th International Conference on Machine
Learning, 2013.
[4] Róbert Busa-Fekete, Balázs Szörényi, Weiwei Cheng, Paul Weng, and Eyke Hüllermeier.
Top-k selection based on adaptive sampling of noisy preferences. In Proceedings of the 30th
International Conference on Machine Learning, 2013.
[5] Nir Ailon, Zohar Karnin, and Thorsten Joachims. Reducing dueling bandits to cardinal bandits.
In Proceedings of the 31st International Conference on Machine Learning, 2014.
[6] Masrour Zoghi, Shimon Whiteson, Remi Munos, and Maarten de Rijke. Relative upper confidence bound for the k-armed dueling bandit problem. In Proceedings of the 31st International
Conference on Machine Learning, 2014.
[7] Masrour Zoghi, Shimon Whiteson, and Maarten de Rijke. MergeRUCB: A method for largescale online ranker evaluation. In Proceedings of the 8th ACM International Conference on Web
Search and Data Mining, 2015.
[8] Kevin Jamieson, Sumeet Katariya, Atul Deshpande, and Robert Nowak. Sparse dueling bandits.
In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics,
2015.
[9] Junpei Komiyama, Junya Honda, Hisashi Kashima, and Hiroshi Nakagawa. Regret lower bound
and optimal algorithm in dueling bandit problem. In Proceedings of the 28th Conference on
Learning Theory, 2015.
[10] Miroslav Dudík, Katja Hofmann, Robert E Schapire, Aleksandrs Slivkins, and Masrour Zoghi.
Contextual dueling bandits. In Proceedings of the 28th Conference on Learning Theory, 2015.
[11] Masrour Zoghi, Zohar S. Karnin, Shimon Whiteson, and Maarten de Rijke. Copeland dueling
bandits. In Advances in Neural Information Processing Systems 28, 2015.
[12] Felix Brandt, Markus Brill, and Paul Harrenstein. Tournament solutions. In Handbook of
Computational Social Choice. Cambridge University Press, 2016.
[13] Felix Brandt, Andre Dau, and Hans Georg Seedig. Bounds on the disparity and separation of
tournament solutions. Discrete Applied Mathematics, 187:41–49, 2015.
[14] Olivier Hudry. A smallest tournament for which the Banks set and the Copeland set are disjoint.
Social Choice and Welfare, 16(1):137–143, 1999.
[15] Kenneth A Shepsle and Barry R Weingast. Uncovered sets and sophisticated voting outcomes
with implications for agenda institutions. American Journal of Political Science, pages 49–74,
1984.
[16] Kevin Jamieson and Robert Nowak. Active ranking using pairwise comparisons. In Advances
in Neural Information Processing Systems, 2011.
| 6337 |@word katja:1 trial:9 exploitation:1 middle:2 open:1 atul:1 decomposition:1 fabrice:1 initial:1 uncovered:18 score:1 contains:2 disparity:1 existing:1 savage:5 contextual:2 current:2 must:3 j1:2 hofmann:1 ramamohan:1 designed:1 update:3 intelligence:1 selected:1 yr:1 instantiate:1 guess:1 website:1 record:1 colored:1 institution:1 node:8 honda:1 preference:21 brandt:2 constructed:4 prove:1 busa:1 manner:1 uja:1 pairwise:13 p8:1 upenn:1 indeed:1 frequently:1 multi:1 pbr:2 actual:1 armed:4 increasing:1 iisc:2 spain:1 underlying:1 bounded:2 what:1 developed:1 guarantee:1 every:2 voting:3 act:1 tie:9 mergerucb:3 control:1 grant:1 jamieson:2 safety:4 felix:2 mistake:1 optimistically:3 tournament:58 chose:1 studied:4 pit:1 co:13 averaged:2 directed:1 unique:1 acknowledgment:1 regret:79 procedure:19 intersect:1 empirical:2 confidence:7 masrour:4 selection:19 minj6:1 restriction:1 map:2 reviewer:1 yt:10 survey:1 identifying:1 immediately:4 deriving:2 maarten:3 notion:11 ccb:5 target:9 pt:1 suppose:1 olivier:1 designing:3 pa:1 element:5 rajkumar:1 satisfying:4 ep:3 worst:1 ensures:2 cycle:15 ui:1 learner:2 completely:1 triangle:1 resolved:2 represented:1 various:1 distinct:11 instantiated:5 argmaxi:5 hiroshi:1 artificial:1 kevin:2 outcome:3 whose:1 modular:2 say:3 otherwise:5 ability:2 favor:1 statistic:1 noisy:1 online:1 advantage:1 maximal:10 unresolved:6 raphael:1 relevant:2 achieve:2 rst:3 neumann:1 sea:1 leave:1 develop:3 pose:1 measured:2 ij:7 progress:1 strong:6 direction:4 safe:16 closely:1 stochastic:10 exploration:7 require:1 pjk:8 generalization:1 investigation:1 anonymous:1 raud:1 strictly:1 hold:3 considered:3 welfare:1 mapping:3 urvoy:1 naamane:1 sought:1 early:1 smallest:3 purpose:1 intermediary:1 travel:1 label:1 arun:1 always:6 corollary:3 derived:3 joachim:3 bernoulli:1 rank:1 zoghi:5 political:1 baseline:1 helpful:1 inference:1 p16:1 bandit:38 relation:2 uij:20 wij:3 selects:7 interested:1 josef:1 arg:2 among:2 issue:1 denoted:4 k6:4 special:3 ernet:2 uc:32 initialize:3 equal:1 once:1 construct:2 karnin:2 sampling:1 identical:9 broad:1 future:1 bangalore:2 cardinal:1 randomly:1 preserve:1 comprehensive:1 individual:7 microsoft:1 interest:9 mining:1 evaluation:2 weng:1 light:1 implication:1 edge:3 nowak:2 re:1 nij:7 minimal:2 p13:1 miroslav:1 earlier:1 cover:3 tp:10 subset:3 entry:1 synthetic:1 thanks:2 st:2 broder:1 international:7 discriminating:1 rounded:1 pessimism:1 together:2 von:1 again:2 satisfied:1 yisong:2 containing:1 manage:1 possibly:1 slowly:1 stochastically:1 american:1 style:3 leading:1 return:12 potential:2 singleton:1 de:3 sec:1 hisashi:1 satisfy:2 explicitly:1 ranking:1 break:1 picked:1 optimistic:2 observing:1 red:1 maintains:1 borda:6 sumeet:1 yield:8 identify:2 rijke:3 weak:4 none:1 trajectory:3 j6:1 randomness:1 minj:1 whenever:1 andre:1 definition:8 against:2 deshpande:1 copeland:22 proof:6 associated:3 anytime:12 ut:3 formalize:1 sophisticated:1 actually:1 appears:1 improved:1 done:1 just:1 ucx:1 receives:2 web:1 google:1 dau:1 siddartha:2 jre:1 usa:1 normalized:3 verify:2 true:4 clerot:1 dud:1 symmetric:3 eyke:1 transitivity:4 covering:1 elect:32 noted:1 won:1 criterion:1 outline:1 ranging:1 wise:1 consideration:1 recently:2 winner:42 discussed:2 association:1 significant:1 cambridge:1 atp:2 mathematics:1 similarly:1 access:1 tennis:5 han:1 j:3 recent:4 reverse:1 certain:1 balazs:1 inequality:1 binary:2 seen:1 minimum:1 additional:3 relaxed:2 dudik:1 period:1 barry:1 signal:2 ii:2 reduces:1 match:3 determination:2 offer:2 long:1 lin:2 variant:2 
represent:1 adopting:1 agarwal:1 receive:1 else:7 grow:1 sr:1 comment:1 induced:1 yue:2 szorenyi:1 db:2 call:1 iii:1 easy:2 sami:1 weiwei:1 variety:1 ujk:1 pennsylvania:1 scb:2 identified:1 reduce:2 tradeoff:1 ranker:1 whether:1 motivated:2 optimism:1 utility:1 returned:2 speaking:1 generally:2 se:1 sst:2 covered:1 detailed:1 band:1 shivani:1 schapire:1 outperform:1 exist:6 llermeier:1 disjoint:4 algorithmically:1 discrete:1 uia:5 express:1 dominance:1 georg:1 four:2 drawn:2 rcw:1 kenneth:1 rectangle:1 year:1 sti:3 run:2 family:2 throughout:1 separation:1 putative:1 pik:5 appendix:11 bound:43 guaranteed:1 followed:1 played:1 cheng:1 junya:1 markus:1 katariya:1 kleinberg:1 min:2 optimality:1 performing:2 developing:1 according:2 xerox:1 ailon:1 jr:2 slightly:1 appealing:1 intuitively:2 gradually:1 invariant:1 thorsten:3 ln:19 count:1 loose:1 fail:1 know:1 rmed:1 end:8 ijt:1 available:1 opponent:3 komiyama:1 apply:2 generic:3 kashima:1 pji:5 professional:2 rp:34 existence:2 rucb:6 pessimistically:1 top:16 assumes:1 include:2 ucbs:12 uj:1 graded:1 question:1 quantity:1 rt:3 dependence:1 mslr:1 said:3 cw:14 link:1 condorcet:22 collected:1 assuming:1 minimizing:1 mostly:1 unfortunately:1 robert:4 potentially:1 ba:31 design:6 agenda:1 proper:3 unknown:2 upper:9 t:28 beat:5 defining:4 bert:1 arbitrary:1 aleksandrs:1 pair:11 namely:1 slivkins:1 barcelona:1 nip:1 zohar:2 beyond:2 usually:2 below:4 including:3 max:9 dueling:34 power:1 suitable:3 critical:1 natural:7 greatest:1 largescale:1 arm:62 identifies:1 tanguy:1 philadelphia:1 sn:1 nir:1 relative:8 permutation:1 suggestion:1 acyclic:7 incident:1 pij:28 bank:20 summary:1 free:1 weaker:1 allow:1 india:2 institute:1 template:3 munos:1 sparse:1 feedback:7 world:1 cumulative:6 commonly:2 made:1 adaptive:1 social:5 preferred:3 implicitly:1 confirm:2 active:1 instantiation:2 uai:2 handbook:1 assumed:3 discriminative:1 search:2 uji:7 table:4 whiteson:3 csa:2 did:1 pk:17 paul:2 roc:32 winning:13 candidate:3 shimon:3 rk:6 theorem:7 specific:1 jt:13 invalidating:1 explored:2 admits:2 virtue:3 concern:1 exists:5 false:1 adding:1 margin:3 horizon:2 tc:32 remi:1 simply:2 likely:1 bo:1 fekete:1 satisfies:14 chance:2 relies:1 acm:1 goal:1 rtc:1 specifically:2 nakagawa:1 reducing:1 brill:1 wij1:1 total:3 called:1 player:1 ucb:54 acyclicity:1 indicating:3 select:10 mast:4 internal:1 junpei:1 assessed:1 indian:1 |